Test Report: KVM_Linux_crio 19672

d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302

Test fail (30/317)

Order  Failed test  Duration (s)
33 TestAddons/parallel/Registry 74.84
34 TestAddons/parallel/Ingress 151.4
36 TestAddons/parallel/MetricsServer 356.2
163 TestMultiControlPlane/serial/StopSecondaryNode 141.52
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.73
165 TestMultiControlPlane/serial/RestartSecondaryNode 6.43
167 TestMultiControlPlane/serial/RestartClusterKeepsNodes 368.45
170 TestMultiControlPlane/serial/StopCluster 141.91
230 TestMultiNode/serial/RestartKeepsNodes 331.9
232 TestMultiNode/serial/StopMultiNode 144.86
239 TestPreload 270.75
247 TestKubernetesUpgrade 411.66
290 TestStartStop/group/old-k8s-version/serial/FirstStart 285
297 TestStartStop/group/no-preload/serial/Stop 139.36
302 TestStartStop/group/embed-certs/serial/Stop 139.05
305 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.15
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
308 TestStartStop/group/old-k8s-version/serial/DeployApp 0.5
309 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 102.32
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
316 TestStartStop/group/old-k8s-version/serial/SecondStart 705.53
317 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.29
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.44
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.37
320 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.52
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 461.39
322 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 514.09
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 334.57
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 160.15
TestAddons/parallel/Registry (74.84s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.277807ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.037355207s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004332171s
addons_test.go:338: (dbg) Run:  kubectl --context addons-628754 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-628754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-628754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.105647325s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-628754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 ip
2024/09/20 20:59:23 [DEBUG] GET http://192.168.39.204:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-628754 -n addons-628754
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 logs -n 25: (2.05827541s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-437637                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-437637                                                                     | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-454496                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-437637                                                                     | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | binary-mirror-545297                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41175                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-545297                                                                     | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-628754 --wait=true                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-628754 ssh cat                                                                       | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | /opt/local-path-provisioner/pvc-3e8f244a-7a2e-48ce-baa7-c2ca4ae064da_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-628754 ip                                                                            | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:57.083784   17426 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:57.084049   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084058   17426 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:57.084063   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084249   17426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 20:47:57.084816   17426 out.go:352] Setting JSON to false
	I0920 20:47:57.085616   17426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1825,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:57.085733   17426 start.go:139] virtualization: kvm guest
	I0920 20:47:57.088022   17426 out.go:177] * [addons-628754] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:57.089697   17426 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:47:57.089699   17426 notify.go:220] Checking for updates...
	I0920 20:47:57.091121   17426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:57.092364   17426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:47:57.093781   17426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.095208   17426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:47:57.096741   17426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:47:57.098321   17426 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:57.129581   17426 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 20:47:57.130933   17426 start.go:297] selected driver: kvm2
	I0920 20:47:57.131005   17426 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:57.131026   17426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:47:57.131969   17426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.132061   17426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:57.147486   17426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:57.147545   17426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:57.147780   17426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:47:57.147811   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:47:57.147852   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:47:57.147859   17426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:57.147906   17426 start.go:340] cluster config:
	{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:57.148009   17426 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.150689   17426 out.go:177] * Starting "addons-628754" primary control-plane node in "addons-628754" cluster
	I0920 20:47:57.151657   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:47:57.151699   17426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:57.151709   17426 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:57.151783   17426 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 20:47:57.151793   17426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 20:47:57.152110   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:47:57.152131   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json: {Name:mkca579941f55a7fbb4bc1fe5edc46778dec65b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:57.152274   17426 start.go:360] acquireMachinesLock for addons-628754: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 20:47:57.152320   17426 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "addons-628754"
	I0920 20:47:57.152337   17426 start.go:93] Provisioning new machine with config: &{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:47:57.152388   17426 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 20:47:57.153857   17426 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 20:47:57.153989   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:47:57.154024   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:47:57.168714   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0920 20:47:57.169174   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:47:57.169774   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:47:57.169793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:47:57.170182   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:47:57.170355   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:47:57.170487   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:47:57.170634   17426 start.go:159] libmachine.API.Create for "addons-628754" (driver="kvm2")
	I0920 20:47:57.170665   17426 client.go:168] LocalClient.Create starting
	I0920 20:47:57.170716   17426 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 20:47:57.285508   17426 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 20:47:57.498317   17426 main.go:141] libmachine: Running pre-create checks...
	I0920 20:47:57.498341   17426 main.go:141] libmachine: (addons-628754) Calling .PreCreateCheck
	I0920 20:47:57.498846   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:47:57.499269   17426 main.go:141] libmachine: Creating machine...
	I0920 20:47:57.499283   17426 main.go:141] libmachine: (addons-628754) Calling .Create
	I0920 20:47:57.499443   17426 main.go:141] libmachine: (addons-628754) Creating KVM machine...
	I0920 20:47:57.500685   17426 main.go:141] libmachine: (addons-628754) DBG | found existing default KVM network
	I0920 20:47:57.501409   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.501274   17448 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0920 20:47:57.501455   17426 main.go:141] libmachine: (addons-628754) DBG | created network xml: 
	I0920 20:47:57.501475   17426 main.go:141] libmachine: (addons-628754) DBG | <network>
	I0920 20:47:57.501483   17426 main.go:141] libmachine: (addons-628754) DBG |   <name>mk-addons-628754</name>
	I0920 20:47:57.501488   17426 main.go:141] libmachine: (addons-628754) DBG |   <dns enable='no'/>
	I0920 20:47:57.501495   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501501   17426 main.go:141] libmachine: (addons-628754) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 20:47:57.501506   17426 main.go:141] libmachine: (addons-628754) DBG |     <dhcp>
	I0920 20:47:57.501513   17426 main.go:141] libmachine: (addons-628754) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 20:47:57.501518   17426 main.go:141] libmachine: (addons-628754) DBG |     </dhcp>
	I0920 20:47:57.501522   17426 main.go:141] libmachine: (addons-628754) DBG |   </ip>
	I0920 20:47:57.501529   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501533   17426 main.go:141] libmachine: (addons-628754) DBG | </network>
	I0920 20:47:57.501539   17426 main.go:141] libmachine: (addons-628754) DBG | 
	I0920 20:47:57.508174   17426 main.go:141] libmachine: (addons-628754) DBG | trying to create private KVM network mk-addons-628754 192.168.39.0/24...
	I0920 20:47:57.573213   17426 main.go:141] libmachine: (addons-628754) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.573243   17426 main.go:141] libmachine: (addons-628754) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:57.573254   17426 main.go:141] libmachine: (addons-628754) DBG | private KVM network mk-addons-628754 192.168.39.0/24 created
	I0920 20:47:57.573273   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.573184   17448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.573460   17426 main.go:141] libmachine: (addons-628754) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 20:47:57.830461   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.830307   17448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa...
	I0920 20:47:57.903386   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903247   17448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk...
	I0920 20:47:57.903428   17426 main.go:141] libmachine: (addons-628754) DBG | Writing magic tar header
	I0920 20:47:57.903439   17426 main.go:141] libmachine: (addons-628754) DBG | Writing SSH key tar header
	I0920 20:47:57.903449   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903380   17448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.903466   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754
	I0920 20:47:57.903508   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 (perms=drwx------)
	I0920 20:47:57.903526   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 20:47:57.903538   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 20:47:57.903554   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.903628   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 20:47:57.903663   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 20:47:57.903677   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 20:47:57.903688   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 20:47:57.903699   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 20:47:57.903713   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins
	I0920 20:47:57.903723   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home
	I0920 20:47:57.903736   17426 main.go:141] libmachine: (addons-628754) DBG | Skipping /home - not owner
	I0920 20:47:57.903755   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 20:47:57.903769   17426 main.go:141] libmachine: (addons-628754) Creating domain...
	I0920 20:47:57.904753   17426 main.go:141] libmachine: (addons-628754) define libvirt domain using xml: 
	I0920 20:47:57.904778   17426 main.go:141] libmachine: (addons-628754) <domain type='kvm'>
	I0920 20:47:57.904788   17426 main.go:141] libmachine: (addons-628754)   <name>addons-628754</name>
	I0920 20:47:57.904826   17426 main.go:141] libmachine: (addons-628754)   <memory unit='MiB'>4000</memory>
	I0920 20:47:57.904840   17426 main.go:141] libmachine: (addons-628754)   <vcpu>2</vcpu>
	I0920 20:47:57.904849   17426 main.go:141] libmachine: (addons-628754)   <features>
	I0920 20:47:57.904882   17426 main.go:141] libmachine: (addons-628754)     <acpi/>
	I0920 20:47:57.904904   17426 main.go:141] libmachine: (addons-628754)     <apic/>
	I0920 20:47:57.904935   17426 main.go:141] libmachine: (addons-628754)     <pae/>
	I0920 20:47:57.904957   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.904970   17426 main.go:141] libmachine: (addons-628754)   </features>
	I0920 20:47:57.904997   17426 main.go:141] libmachine: (addons-628754)   <cpu mode='host-passthrough'>
	I0920 20:47:57.905010   17426 main.go:141] libmachine: (addons-628754)   
	I0920 20:47:57.905023   17426 main.go:141] libmachine: (addons-628754)   </cpu>
	I0920 20:47:57.905034   17426 main.go:141] libmachine: (addons-628754)   <os>
	I0920 20:47:57.905043   17426 main.go:141] libmachine: (addons-628754)     <type>hvm</type>
	I0920 20:47:57.905052   17426 main.go:141] libmachine: (addons-628754)     <boot dev='cdrom'/>
	I0920 20:47:57.905061   17426 main.go:141] libmachine: (addons-628754)     <boot dev='hd'/>
	I0920 20:47:57.905071   17426 main.go:141] libmachine: (addons-628754)     <bootmenu enable='no'/>
	I0920 20:47:57.905078   17426 main.go:141] libmachine: (addons-628754)   </os>
	I0920 20:47:57.905087   17426 main.go:141] libmachine: (addons-628754)   <devices>
	I0920 20:47:57.905096   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='cdrom'>
	I0920 20:47:57.905111   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/boot2docker.iso'/>
	I0920 20:47:57.905125   17426 main.go:141] libmachine: (addons-628754)       <target dev='hdc' bus='scsi'/>
	I0920 20:47:57.905135   17426 main.go:141] libmachine: (addons-628754)       <readonly/>
	I0920 20:47:57.905150   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905168   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='disk'>
	I0920 20:47:57.905191   17426 main.go:141] libmachine: (addons-628754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 20:47:57.905212   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk'/>
	I0920 20:47:57.905226   17426 main.go:141] libmachine: (addons-628754)       <target dev='hda' bus='virtio'/>
	I0920 20:47:57.905236   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905246   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905258   17426 main.go:141] libmachine: (addons-628754)       <source network='mk-addons-628754'/>
	I0920 20:47:57.905267   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905277   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905292   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905308   17426 main.go:141] libmachine: (addons-628754)       <source network='default'/>
	I0920 20:47:57.905320   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905335   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905345   17426 main.go:141] libmachine: (addons-628754)     <serial type='pty'>
	I0920 20:47:57.905351   17426 main.go:141] libmachine: (addons-628754)       <target port='0'/>
	I0920 20:47:57.905362   17426 main.go:141] libmachine: (addons-628754)     </serial>
	I0920 20:47:57.905372   17426 main.go:141] libmachine: (addons-628754)     <console type='pty'>
	I0920 20:47:57.905387   17426 main.go:141] libmachine: (addons-628754)       <target type='serial' port='0'/>
	I0920 20:47:57.905404   17426 main.go:141] libmachine: (addons-628754)     </console>
	I0920 20:47:57.905414   17426 main.go:141] libmachine: (addons-628754)     <rng model='virtio'>
	I0920 20:47:57.905426   17426 main.go:141] libmachine: (addons-628754)       <backend model='random'>/dev/random</backend>
	I0920 20:47:57.905436   17426 main.go:141] libmachine: (addons-628754)     </rng>
	I0920 20:47:57.905445   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905453   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905463   17426 main.go:141] libmachine: (addons-628754)   </devices>
	I0920 20:47:57.905471   17426 main.go:141] libmachine: (addons-628754) </domain>
	I0920 20:47:57.905483   17426 main.go:141] libmachine: (addons-628754) 
	I0920 20:47:57.911357   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:e6:b4:6f in network default
	I0920 20:47:57.911872   17426 main.go:141] libmachine: (addons-628754) Ensuring networks are active...
	I0920 20:47:57.911889   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:57.912569   17426 main.go:141] libmachine: (addons-628754) Ensuring network default is active
	I0920 20:47:57.912863   17426 main.go:141] libmachine: (addons-628754) Ensuring network mk-addons-628754 is active
	I0920 20:47:57.913288   17426 main.go:141] libmachine: (addons-628754) Getting domain xml...
	I0920 20:47:57.913898   17426 main.go:141] libmachine: (addons-628754) Creating domain...
	I0920 20:47:59.326755   17426 main.go:141] libmachine: (addons-628754) Waiting to get IP...
	I0920 20:47:59.327557   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.328005   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.328033   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.327972   17448 retry.go:31] will retry after 302.559317ms: waiting for machine to come up
	I0920 20:47:59.632500   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.632880   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.632917   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.632857   17448 retry.go:31] will retry after 377.023743ms: waiting for machine to come up
	I0920 20:48:00.011524   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.012077   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.012099   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.012026   17448 retry.go:31] will retry after 369.422393ms: waiting for machine to come up
	I0920 20:48:00.382598   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.383102   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.383131   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.383057   17448 retry.go:31] will retry after 534.800716ms: waiting for machine to come up
	I0920 20:48:00.919797   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.920325   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.920348   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.920289   17448 retry.go:31] will retry after 493.667737ms: waiting for machine to come up
	I0920 20:48:01.416002   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:01.416381   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:01.416402   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:01.416339   17448 retry.go:31] will retry after 621.848692ms: waiting for machine to come up
	I0920 20:48:02.040290   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.040677   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.040701   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.040643   17448 retry.go:31] will retry after 861.306251ms: waiting for machine to come up
	I0920 20:48:02.903325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.903717   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.903735   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.903678   17448 retry.go:31] will retry after 1.459954527s: waiting for machine to come up
	I0920 20:48:04.365414   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:04.365855   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:04.365878   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:04.365791   17448 retry.go:31] will retry after 1.358721041s: waiting for machine to come up
	I0920 20:48:05.726477   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:05.726947   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:05.726992   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:05.726912   17448 retry.go:31] will retry after 1.859066738s: waiting for machine to come up
	I0920 20:48:07.588072   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:07.588572   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:07.588597   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:07.588527   17448 retry.go:31] will retry after 2.363120827s: waiting for machine to come up
	I0920 20:48:09.953935   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:09.954415   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:09.954440   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:09.954387   17448 retry.go:31] will retry after 2.276156744s: waiting for machine to come up
	I0920 20:48:12.232531   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:12.232978   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:12.233000   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:12.232935   17448 retry.go:31] will retry after 3.494675482s: waiting for machine to come up
	I0920 20:48:15.728788   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:15.729264   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:15.729297   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:15.729233   17448 retry.go:31] will retry after 3.623474403s: waiting for machine to come up
	I0920 20:48:19.353890   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.354300   17426 main.go:141] libmachine: (addons-628754) Found IP for machine: 192.168.39.204
	I0920 20:48:19.354318   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has current primary IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.354324   17426 main.go:141] libmachine: (addons-628754) Reserving static IP address...
	I0920 20:48:19.354718   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find host DHCP lease matching {name: "addons-628754", mac: "52:54:00:59:f8:c3", ip: "192.168.39.204"} in network mk-addons-628754
	I0920 20:48:19.430989   17426 main.go:141] libmachine: (addons-628754) DBG | Getting to WaitForSSH function...
	I0920 20:48:19.431022   17426 main.go:141] libmachine: (addons-628754) Reserved static IP address: 192.168.39.204
	I0920 20:48:19.431035   17426 main.go:141] libmachine: (addons-628754) Waiting for SSH to be available...
	I0920 20:48:19.433368   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433729   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.433754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433893   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH client type: external
	I0920 20:48:19.433951   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa (-rw-------)
	I0920 20:48:19.433990   17426 main.go:141] libmachine: (addons-628754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 20:48:19.434007   17426 main.go:141] libmachine: (addons-628754) DBG | About to run SSH command:
	I0920 20:48:19.434034   17426 main.go:141] libmachine: (addons-628754) DBG | exit 0
	I0920 20:48:19.567000   17426 main.go:141] libmachine: (addons-628754) DBG | SSH cmd err, output: <nil>: 
	I0920 20:48:19.567274   17426 main.go:141] libmachine: (addons-628754) KVM machine creation complete!
	I0920 20:48:19.567605   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:19.568125   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568301   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568423   17426 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 20:48:19.568447   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:19.569781   17426 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 20:48:19.569796   17426 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 20:48:19.569803   17426 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 20:48:19.569812   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.572230   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572609   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.572636   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572734   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.572904   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573263   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.573417   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.573596   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.573606   17426 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 20:48:19.686484   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:19.686510   17426 main.go:141] libmachine: Detecting the provisioner...
	I0920 20:48:19.686520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.689488   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.689939   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.689963   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.690171   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.690367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690533   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690675   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.690829   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.691037   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.691051   17426 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 20:48:19.803894   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 20:48:19.803943   17426 main.go:141] libmachine: found compatible host: buildroot
	I0920 20:48:19.803949   17426 main.go:141] libmachine: Provisioning with buildroot...
	I0920 20:48:19.803959   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804229   17426 buildroot.go:166] provisioning hostname "addons-628754"
	I0920 20:48:19.804257   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804429   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.807930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808301   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.808325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808470   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.808640   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808753   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808855   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.808986   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.809178   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.809194   17426 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-628754 && echo "addons-628754" | sudo tee /etc/hostname
	I0920 20:48:19.933979   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-628754
	
	I0920 20:48:19.934010   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.936671   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.936966   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.937020   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.937216   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.937412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937572   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937681   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.937835   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.938005   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.938027   17426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-628754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-628754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-628754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:48:20.060621   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:20.060648   17426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 20:48:20.060666   17426 buildroot.go:174] setting up certificates
	I0920 20:48:20.060675   17426 provision.go:84] configureAuth start
	I0920 20:48:20.060683   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:20.060932   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.063555   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.063906   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.063931   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.064077   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.066288   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066581   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.066599   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066772   17426 provision.go:143] copyHostCerts
	I0920 20:48:20.066842   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 20:48:20.067014   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 20:48:20.067102   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 20:48:20.067175   17426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.addons-628754 san=[127.0.0.1 192.168.39.204 addons-628754 localhost minikube]
	I0920 20:48:20.207459   17426 provision.go:177] copyRemoteCerts
	I0920 20:48:20.207541   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:48:20.207568   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.210051   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210438   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.210465   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210641   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.210815   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.210936   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.211089   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.297517   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 20:48:20.323502   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 20:48:20.348382   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 20:48:20.372714   17426 provision.go:87] duration metric: took 312.024389ms to configureAuth
	I0920 20:48:20.372743   17426 buildroot.go:189] setting minikube options for container-runtime
	I0920 20:48:20.372974   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:20.373093   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.375745   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376103   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.376129   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376286   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.376456   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376577   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376741   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.376885   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.377069   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.377088   17426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 20:48:20.603620   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 20:48:20.603645   17426 main.go:141] libmachine: Checking connection to Docker...
	I0920 20:48:20.603654   17426 main.go:141] libmachine: (addons-628754) Calling .GetURL
	I0920 20:48:20.605014   17426 main.go:141] libmachine: (addons-628754) DBG | Using libvirt version 6000000
	I0920 20:48:20.607455   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607777   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.607807   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607965   17426 main.go:141] libmachine: Docker is up and running!
	I0920 20:48:20.607980   17426 main.go:141] libmachine: Reticulating splines...
	I0920 20:48:20.607989   17426 client.go:171] duration metric: took 23.437316144s to LocalClient.Create
	I0920 20:48:20.608018   17426 start.go:167] duration metric: took 23.437384362s to libmachine.API.Create "addons-628754"
	I0920 20:48:20.608030   17426 start.go:293] postStartSetup for "addons-628754" (driver="kvm2")
	I0920 20:48:20.608041   17426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:20.608073   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.608293   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:20.608319   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.610317   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610627   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.610652   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610781   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.610921   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.611034   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.611124   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.698116   17426 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:48:20.702771   17426 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 20:48:20.702801   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 20:48:20.702873   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 20:48:20.702896   17426 start.go:296] duration metric: took 94.859468ms for postStartSetup
	I0920 20:48:20.702927   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:20.703462   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.705754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706028   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.706043   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706282   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:48:20.706477   17426 start.go:128] duration metric: took 23.554079183s to createHost
	I0920 20:48:20.706501   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.708765   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709058   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.709077   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709229   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.709401   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709532   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709626   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.709751   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.709947   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.709959   17426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 20:48:20.824097   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726865300.788019293
	
	I0920 20:48:20.824119   17426 fix.go:216] guest clock: 1726865300.788019293
	I0920 20:48:20.824129   17426 fix.go:229] Guest: 2024-09-20 20:48:20.788019293 +0000 UTC Remote: 2024-09-20 20:48:20.706488068 +0000 UTC m=+23.656717453 (delta=81.531225ms)
	I0920 20:48:20.824169   17426 fix.go:200] guest clock delta is within tolerance: 81.531225ms
	I0920 20:48:20.824174   17426 start.go:83] releasing machines lock for "addons-628754", held for 23.671844938s
	I0920 20:48:20.824195   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.824499   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.827015   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827394   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.827425   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827590   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828104   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828268   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828372   17426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:48:20.828427   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.828488   17426 ssh_runner.go:195] Run: cat /version.json
	I0920 20:48:20.828511   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.831137   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831303   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831481   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831511   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831658   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831693   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831734   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831898   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831916   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832049   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832060   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832158   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832182   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.832264   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.937268   17426 ssh_runner.go:195] Run: systemctl --version
	I0920 20:48:20.943278   17426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 20:48:21.102731   17426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 20:48:21.109306   17426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 20:48:21.109385   17426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:21.125977   17426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 20:48:21.126016   17426 start.go:495] detecting cgroup driver to use...
	I0920 20:48:21.126077   17426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 20:48:21.141923   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 20:48:21.156336   17426 docker.go:217] disabling cri-docker service (if available) ...
	I0920 20:48:21.156395   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 20:48:21.170041   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 20:48:21.183756   17426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 20:48:21.294187   17426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 20:48:21.437230   17426 docker.go:233] disabling docker service ...
	I0920 20:48:21.437293   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 20:48:21.452740   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 20:48:21.465572   17426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 20:48:21.594165   17426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 20:48:21.703184   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 20:48:21.716853   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:21.735742   17426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 20:48:21.735807   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.746697   17426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 20:48:21.746783   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.758620   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.769552   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.780345   17426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:21.791319   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.802437   17426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.820563   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.831706   17426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:21.841879   17426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 20:48:21.841934   17426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 20:48:21.855580   17426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:48:21.865881   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:21.975704   17426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 20:48:22.076074   17426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 20:48:22.076172   17426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 20:48:22.081028   17426 start.go:563] Will wait 60s for crictl version
	I0920 20:48:22.081104   17426 ssh_runner.go:195] Run: which crictl
	I0920 20:48:22.085024   17426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:48:22.123994   17426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 20:48:22.124131   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.155608   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.254304   17426 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 20:48:22.271322   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:22.274135   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274494   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:22.274519   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274758   17426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:22.279562   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:22.292693   17426 kubeadm.go:883] updating cluster {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:48:22.292826   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:48:22.292872   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:22.328943   17426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 20:48:22.329008   17426 ssh_runner.go:195] Run: which lz4
	I0920 20:48:22.333252   17426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 20:48:22.337720   17426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 20:48:22.337751   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 20:48:23.694739   17426 crio.go:462] duration metric: took 1.361518572s to copy over tarball
	I0920 20:48:23.694808   17426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 20:48:25.883886   17426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18904765s)
	I0920 20:48:25.883920   17426 crio.go:469] duration metric: took 2.18915492s to extract the tarball
	I0920 20:48:25.883930   17426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 20:48:25.922070   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:25.969998   17426 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 20:48:25.970026   17426 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:48:25.970037   17426 kubeadm.go:934] updating node { 192.168.39.204 8443 v1.31.1 crio true true} ...
	I0920 20:48:25.970130   17426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-628754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 20:48:25.970191   17426 ssh_runner.go:195] Run: crio config
	I0920 20:48:26.021952   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:26.021980   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:26.021993   17426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:26.022019   17426 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-628754 NodeName:addons-628754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:26.022191   17426 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-628754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 20:48:26.022274   17426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:26.032477   17426 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:48:26.032538   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:26.042683   17426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 20:48:26.059732   17426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:26.076258   17426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 20:48:26.093707   17426 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:26.097791   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:26.110716   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:26.236386   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:26.253722   17426 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754 for IP: 192.168.39.204
	I0920 20:48:26.253745   17426 certs.go:194] generating shared ca certs ...
	I0920 20:48:26.253764   17426 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.253934   17426 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 20:48:26.484930   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt ...
	I0920 20:48:26.484965   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt: {Name:mke2d7aeeb7db3e9800a1083a3ff2f86790ad6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485165   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key ...
	I0920 20:48:26.485179   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key: {Name:mk17ffdadb7b9aab721bb9d1c788c9a2ddb298de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485278   17426 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 20:48:26.719056   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt ...
	I0920 20:48:26.719086   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt: {Name:mkfada96f4c7b2fd8274928ed1dbb16b11a88c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719248   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key ...
	I0920 20:48:26.719258   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key: {Name:mkd5ebdfcf48f4c7d47e8f4a6008f58fc89c2fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719323   17426 certs.go:256] generating profile certs ...
	I0920 20:48:26.719397   17426 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key
	I0920 20:48:26.719413   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt with IP's: []
	I0920 20:48:26.933060   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt ...
	I0920 20:48:26.933092   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: {Name:mk91a397bc4ff329a96b4b263f0366897ac67b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933260   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key ...
	I0920 20:48:26.933270   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key: {Name:mkb7eb1952eb7eecce91764549c5ad4831a240c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933342   17426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd
	I0920 20:48:26.933369   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I0920 20:48:27.039700   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd ...
	I0920 20:48:27.039729   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd: {Name:mke2e260dbaf2f40e6a6aae6719316d894241e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd ...
	I0920 20:48:27.039902   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd: {Name:mk8e28dd00075b54e6ff8d4a4eca983c0a2a96b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039979   17426 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt
	I0920 20:48:27.040049   17426 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key
	I0920 20:48:27.040093   17426 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key
	I0920 20:48:27.040110   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt with IP's: []
	I0920 20:48:27.264693   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt ...
	I0920 20:48:27.264722   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt: {Name:mkb317a338bad2a13df21d64fc6002a23cf2c57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.264890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key ...
	I0920 20:48:27.264900   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key: {Name:mk1033c85ac3998dbc1def471b2b99da31018275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.265070   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:48:27.265101   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:48:27.265124   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:27.265145   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 20:48:27.265681   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:27.295419   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 20:48:27.327322   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:27.354123   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:48:27.380181   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 20:48:27.409469   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 20:48:27.434423   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:27.459334   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 20:48:27.486398   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:27.514949   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:27.535057   17426 ssh_runner.go:195] Run: openssl version
	I0920 20:48:27.541556   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:27.552841   17426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557488   17426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557542   17426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.563591   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:48:27.574585   17426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:27.578724   17426 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:27.578767   17426 kubeadm.go:392] StartCluster: {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:27.578828   17426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 20:48:27.578888   17426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 20:48:27.614372   17426 cri.go:89] found id: ""
	I0920 20:48:27.614428   17426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:27.624616   17426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:27.634583   17426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:27.644397   17426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:27.644424   17426 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:27.644475   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:27.654186   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:27.654252   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:27.663878   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:27.673440   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:27.673505   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:27.683173   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.692642   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:27.692697   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.703397   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:27.713535   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:27.713604   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 20:48:27.723779   17426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 20:48:27.782861   17426 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:27.782945   17426 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:27.906000   17426 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:27.906163   17426 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:27.906287   17426 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:27.919732   17426 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:27.997360   17426 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:27.997466   17426 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:27.997553   17426 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:28.094857   17426 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:28.239479   17426 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:28.395927   17426 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:28.720325   17426 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:28.850828   17426 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:28.851063   17426 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:28.998676   17426 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:28.998936   17426 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:29.508367   17426 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:48:29.566861   17426 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:48:29.658573   17426 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:48:29.658676   17426 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:48:30.255811   17426 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:48:30.388840   17426 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:48:30.636127   17426 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:48:30.741849   17426 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:48:30.809198   17426 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:48:30.809704   17426 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:48:30.812136   17426 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:48:30.814442   17426 out.go:235]   - Booting up control plane ...
	I0920 20:48:30.814560   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:48:30.814659   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:48:30.814748   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:48:30.829867   17426 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:48:30.836797   17426 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:48:30.837005   17426 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:48:30.959791   17426 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:48:30.959951   17426 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:48:31.461582   17426 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.116569ms
	I0920 20:48:31.461681   17426 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:48:36.962881   17426 kubeadm.go:310] [api-check] The API server is healthy after 5.50286333s
	I0920 20:48:36.984905   17426 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:48:37.018104   17426 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:48:37.059205   17426 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:48:37.059469   17426 kubeadm.go:310] [mark-control-plane] Marking the node addons-628754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:48:37.076513   17426 kubeadm.go:310] [bootstrap-token] Using token: bdg12k.viyp3bffzs7386tv
	I0920 20:48:37.078129   17426 out.go:235]   - Configuring RBAC rules ...
	I0920 20:48:37.078263   17426 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:48:37.084820   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:48:37.097641   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:48:37.105580   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:48:37.114381   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:48:37.118116   17426 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:48:37.369422   17426 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:48:37.806370   17426 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:48:38.367972   17426 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:48:38.369871   17426 kubeadm.go:310] 
	I0920 20:48:38.369946   17426 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:48:38.369957   17426 kubeadm.go:310] 
	I0920 20:48:38.370072   17426 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:48:38.370082   17426 kubeadm.go:310] 
	I0920 20:48:38.370118   17426 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:48:38.370196   17426 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:48:38.370272   17426 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:48:38.370283   17426 kubeadm.go:310] 
	I0920 20:48:38.370376   17426 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:48:38.370400   17426 kubeadm.go:310] 
	I0920 20:48:38.370460   17426 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:48:38.370469   17426 kubeadm.go:310] 
	I0920 20:48:38.370530   17426 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:48:38.370630   17426 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:48:38.370742   17426 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:48:38.370757   17426 kubeadm.go:310] 
	I0920 20:48:38.370878   17426 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:48:38.370978   17426 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:48:38.370989   17426 kubeadm.go:310] 
	I0920 20:48:38.371071   17426 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371162   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 20:48:38.371182   17426 kubeadm.go:310] 	--control-plane 
	I0920 20:48:38.371188   17426 kubeadm.go:310] 
	I0920 20:48:38.371275   17426 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:48:38.371291   17426 kubeadm.go:310] 
	I0920 20:48:38.371414   17426 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371569   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 20:48:38.373923   17426 kubeadm.go:310] W0920 20:48:27.749585     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374277   17426 kubeadm.go:310] W0920 20:48:27.750640     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374448   17426 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 20:48:38.374477   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:38.374486   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:38.376506   17426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:48:38.377736   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:48:38.391254   17426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 20:48:38.410759   17426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:48:38.410828   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:38.410916   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-628754 minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-628754 minikube.k8s.io/primary=true
	I0920 20:48:38.438031   17426 ops.go:34] apiserver oom_adj: -16
	I0920 20:48:38.537631   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.038754   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.537782   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.038712   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.538203   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.038038   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.538335   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.038436   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.537928   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.637542   17426 kubeadm.go:1113] duration metric: took 4.226768279s to wait for elevateKubeSystemPrivileges
	I0920 20:48:42.637585   17426 kubeadm.go:394] duration metric: took 15.05882001s to StartCluster
	I0920 20:48:42.637611   17426 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.637771   17426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:48:42.638195   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.638402   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:48:42.638407   17426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:48:42.638474   17426 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 20:48:42.638588   17426 addons.go:69] Setting yakd=true in profile "addons-628754"
	I0920 20:48:42.638602   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.638606   17426 addons.go:69] Setting inspektor-gadget=true in profile "addons-628754"
	I0920 20:48:42.638622   17426 addons.go:69] Setting volcano=true in profile "addons-628754"
	I0920 20:48:42.638632   17426 addons.go:234] Setting addon volcano=true in "addons-628754"
	I0920 20:48:42.638613   17426 addons.go:234] Setting addon yakd=true in "addons-628754"
	I0920 20:48:42.638660   17426 addons.go:69] Setting metrics-server=true in profile "addons-628754"
	I0920 20:48:42.638668   17426 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-628754"
	I0920 20:48:42.638682   17426 addons.go:69] Setting gcp-auth=true in profile "addons-628754"
	I0920 20:48:42.638663   17426 addons.go:69] Setting cloud-spanner=true in profile "addons-628754"
	I0920 20:48:42.638683   17426 addons.go:69] Setting volumesnapshots=true in profile "addons-628754"
	I0920 20:48:42.638694   17426 addons.go:69] Setting ingress=true in profile "addons-628754"
	I0920 20:48:42.638704   17426 addons.go:234] Setting addon cloud-spanner=true in "addons-628754"
	I0920 20:48:42.638706   17426 addons.go:234] Setting addon volumesnapshots=true in "addons-628754"
	I0920 20:48:42.638709   17426 addons.go:234] Setting addon ingress=true in "addons-628754"
	I0920 20:48:42.638710   17426 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:42.638712   17426 addons.go:69] Setting default-storageclass=true in profile "addons-628754"
	I0920 20:48:42.638726   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-628754"
	I0920 20:48:42.638731   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638732   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638739   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638747   17426 addons.go:69] Setting ingress-dns=true in profile "addons-628754"
	I0920 20:48:42.638759   17426 addons.go:234] Setting addon ingress-dns=true in "addons-628754"
	I0920 20:48:42.638783   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638849   17426 addons.go:69] Setting registry=true in profile "addons-628754"
	I0920 20:48:42.638879   17426 addons.go:234] Setting addon registry=true in "addons-628754"
	I0920 20:48:42.638907   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638706   17426 mustload.go:65] Loading cluster: addons-628754
	I0920 20:48:42.639189   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639194   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639204   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.638669   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639211   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639222   17426 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-628754"
	I0920 20:48:42.639215   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639229   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639234   17426 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-628754"
	I0920 20:48:42.639239   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639259   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638657   17426 addons.go:234] Setting addon inspektor-gadget=true in "addons-628754"
	I0920 20:48:42.638623   17426 addons.go:69] Setting storage-provisioner=true in profile "addons-628754"
	I0920 20:48:42.639295   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639303   17426 addons.go:234] Setting addon storage-provisioner=true in "addons-628754"
	I0920 20:48:42.639310   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.639322   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639328   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638618   17426 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-628754"
	I0920 20:48:42.639348   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-628754"
	I0920 20:48:42.639357   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.638741   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 addons.go:234] Setting addon metrics-server=true in "addons-628754"
	I0920 20:48:42.639327   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639559   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639635   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639647   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639654   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639207   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639669   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639689   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639712   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639774   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639778   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639792   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639925   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639957   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.640112   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.640835   17426 out.go:177] * Verifying Kubernetes components...
	I0920 20:48:42.642673   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:42.660941   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0920 20:48:42.667687   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0920 20:48:42.667692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.667839   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678558   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.678612   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.678621   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678756   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.679309   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679336   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679311   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679408   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679771   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.680337   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.680363   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.680576   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.690668   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 20:48:42.691311   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.691872   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.691890   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.692290   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.692477   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.694008   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0920 20:48:42.697482   17426 addons.go:234] Setting addon default-storageclass=true in "addons-628754"
	I0920 20:48:42.697526   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.697886   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.697921   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699272   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.699624   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.699675   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699924   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.699947   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.700602   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.700795   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.701070   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0920 20:48:42.701245   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0920 20:48:42.701882   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.702429   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.702446   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.702516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0920 20:48:42.702666   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.703662   17426 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-628754"
	I0920 20:48:42.703696   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.704048   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.704078   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.704398   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.704413   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.705415   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705489   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.705551   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705944   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706011   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.706556   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706575   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.707457   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.707477   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.708190   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.708812   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.708852   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.709335   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0920 20:48:42.709840   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.710499   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.710518   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.710878   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.711395   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.711432   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.717610   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0920 20:48:42.718243   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.718856   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.718894   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.719298   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.719840   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.719876   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.725177   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0920 20:48:42.725418   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0920 20:48:42.725937   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726018   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726549   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.726565   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.726909   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.727514   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.727549   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.727836   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.727849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.730675   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0920 20:48:42.730906   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0920 20:48:42.731307   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731389   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731827   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.731849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.732193   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.732707   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.732748   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.735536   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736097   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.736137   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.736396   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736991   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.737017   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.744129   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0920 20:48:42.744671   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.747553   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0920 20:48:42.748011   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.749026   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.749052   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.749449   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.750408   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39107
	I0920 20:48:42.750584   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.750999   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.751611   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751627   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751764   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751776   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751844   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0920 20:48:42.752245   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.752314   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.752542   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.752592   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.753503   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0920 20:48:42.753974   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.754124   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0920 20:48:42.754789   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754806   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.754922   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.754954   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754979   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.755347   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.755748   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.755778   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756451   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.756504   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756710   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.756809   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0920 20:48:42.756911   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756956   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756972   17426 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 20:48:42.757147   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.757160   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.757573   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.757601   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.757786   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.757848   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.757957   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.758203   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.758567   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:42.758586   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 20:48:42.758609   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.759930   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.759949   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.760329   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.760522   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.762149   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0920 20:48:42.763177   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.763632   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764071   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.764089   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764362   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.764542   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.764721   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.764829   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.764982   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:48:42.765250   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.765505   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.765517   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.766293   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:48:42.766312   17426 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:48:42.766331   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.767282   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:42.767602   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.767617   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:42.767629   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.767635   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.768023   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.768620   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.768636   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.769033   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.769606   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.769647   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.770046   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770055   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.770081   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.770105   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770202   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.770375   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.770438   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.770446   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 20:48:42.770526   17426 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 20:48:42.770669   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.772010   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.773576   17426 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:48:42.774641   17426 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:48:42.774658   17426 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:48:42.774678   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.775024   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I0920 20:48:42.775556   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.776138   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.776153   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.777044   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.777261   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.777562   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0920 20:48:42.777798   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.778169   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.778272   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.778289   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.779104   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.779270   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.779278   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.779293   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.779465   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.779657   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.779942   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.780180   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.781516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0920 20:48:42.781890   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.781970   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.782665   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.784310   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
	I0920 20:48:42.784396   17426 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:48:42.784455   17426 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:48:42.784798   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785170   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0920 20:48:42.785585   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785870   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.785888   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786003   17426 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:42.786021   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:48:42.786040   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786160   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786170   17426 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:42.786174   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786179   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:48:42.786192   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786754   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786805   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786848   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786871   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.787069   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787456   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.787704   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787764   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.788455   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0920 20:48:42.788995   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.789603   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.789631   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.790117   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.790898   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790970   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791037   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.791058   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.791396   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.791464   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791556   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.791750   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.791781   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791892   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.792728   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.792749   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.792788   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.792821   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.793062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.793246   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.793367   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.794000   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:48:42.794771   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.795365   17426 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:48:42.795406   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 20:48:42.796390   17426 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:48:42.797256   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I0920 20:48:42.797777   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:48:42.797865   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:48:42.797883   17426 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:48:42.797902   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.799447   17426 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:48:42.799589   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.800122   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.801616   17426 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:48:42.801632   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:48:42.801653   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.801713   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:48:42.802118   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.802173   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.802806   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0920 20:48:42.803029   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.803221   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0920 20:48:42.803404   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.803411   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.803584   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0920 20:48:42.803663   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.803723   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.803906   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.803922   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804042   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.804056   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804059   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.804245   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.804521   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.804585   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.804639   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804782   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.804793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804844   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.804970   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:48:42.805144   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.805349   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:42.805330   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.805367   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 20:48:42.805386   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.805528   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.805581   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.805711   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.805727   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.806112   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.806210   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806321   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.806672   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.806695   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806377   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.806874   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.807029   17426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:48:42.807167   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.807229   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.807513   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.808311   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:42.808334   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:48:42.808352   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.809310   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:48:42.811107   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:48:42.811156   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.811412   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.811454   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.811479   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.811499   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.811515   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.812860   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.812904   17426 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:48:42.813048   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0920 20:48:42.813072   17426 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:42.813465   17426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:48:42.813488   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.813075   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.813656   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.814613   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:48:42.814632   17426 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:48:42.814649   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.814848   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.815274   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:48:42.815352   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.815377   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.815679   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.815732   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.815942   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.816341   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.816374   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.816631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.816813   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.816997   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.817143   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.817549   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:48:42.818037   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.818075   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818471   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.818494   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.818744   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:48:42.818758   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:48:42.818767   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.818776   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.818906   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.819052   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.819435   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.819491   17426 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:48:42.819891   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.819917   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.820019   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.820163   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.820412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.820530   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.821660   17426 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:48:42.821707   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822065   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.822084   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822234   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.822379   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.822499   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.822604   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.822845   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:42.822864   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:48:42.822891   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.825409   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826320   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.826340   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826490   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.826650   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.826791   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.826997   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:43.153916   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:43.168073   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:43.168113   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 20:48:43.291676   17426 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:48:43.291703   17426 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:48:43.315362   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:48:43.315391   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:48:43.316712   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:43.334248   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:43.348744   17426 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:48:43.348770   17426 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:48:43.355224   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:43.380424   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:48:43.380456   17426 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:48:43.394590   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:48:43.394621   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:48:43.398170   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:48:43.398196   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:48:43.428507   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:43.436406   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:43.461703   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:43.498907   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:48:43.498930   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:48:43.574065   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:48:43.574091   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:48:43.575337   17426 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.575356   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:48:43.577797   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:48:43.577814   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:48:43.633050   17426 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:48:43.633074   17426 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:48:43.634773   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:48:43.634794   17426 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:48:43.724101   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:48:43.724129   17426 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:48:43.756985   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:48:43.757016   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:48:43.773054   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.788001   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:48:43.788031   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:48:43.885553   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:48:43.885575   17426 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:48:43.888832   17426 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:48:43.888852   17426 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:48:43.950930   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:43.950977   17426 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:48:43.992296   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:48:43.992318   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:48:44.002913   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:48:44.002938   17426 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:48:44.086858   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:48:44.086885   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:48:44.087873   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.087888   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:48:44.166057   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:48:44.166083   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:48:44.250028   17426 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.250057   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:48:44.341452   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:44.487857   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.491856   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:48:44.491882   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:48:44.577907   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:48:44.577939   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:48:44.667346   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.955208   17426 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:48:44.955243   17426 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:48:44.999353   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:48:44.999376   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:48:45.209150   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:48:45.209172   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:48:45.262741   17426 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.262773   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:48:45.491632   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:45.491665   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:48:45.602976   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.772919   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:46.676907   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.522947186s)
	I0920 20:48:46.676970   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.676965   17426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.50885438s)
	I0920 20:48:46.676983   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.676991   17426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.508845802s)
	I0920 20:48:46.677011   17426 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 20:48:46.677250   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677312   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677351   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.677279   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:46.677361   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.677562   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677575   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677980   17426 node_ready.go:35] waiting up to 6m0s for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718417   17426 node_ready.go:49] node "addons-628754" has status "Ready":"True"
	I0920 20:48:46.718446   17426 node_ready.go:38] duration metric: took 40.437292ms for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718457   17426 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:48:46.792590   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:47.364664   17426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-628754" context rescaled to 1 replicas
	I0920 20:48:48.649858   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.333111598s)
	I0920 20:48:48.649919   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649931   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.649939   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.315657166s)
	I0920 20:48:48.649974   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649986   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650035   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.294786194s)
	I0920 20:48:48.650101   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650114   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650278   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650325   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650395   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650409   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650420   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650365   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650483   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650494   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650502   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650281   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650425   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650655   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650663   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650670   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650458   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650706   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650736   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650743   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650777   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652127   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652143   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.652308   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652326   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652333   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.858610   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:48.896047   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.896071   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.896417   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.896440   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:49.836065   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:48:49.836110   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:49.838684   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839092   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:49.839121   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839312   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:49.839498   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:49.839673   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:49.839822   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.396937   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:48:50.507257   17426 addons.go:234] Setting addon gcp-auth=true in "addons-628754"
	I0920 20:48:50.507314   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:50.507599   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.507638   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.522912   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0920 20:48:50.523406   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.523934   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.523954   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.524273   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.524773   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.524796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.540243   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0920 20:48:50.540743   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.541204   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.541230   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.541545   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.541736   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:50.543277   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:50.543496   17426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:48:50.543517   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:50.546450   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.546948   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:50.546998   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.547180   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:50.547367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:50.547520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:50.547677   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.940912   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.512363058s)
	I0920 20:48:50.940953   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.504514111s)
	I0920 20:48:50.940967   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940980   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.940984   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940994   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941024   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.479289353s)
	I0920 20:48:50.941058   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941073   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941075   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.167991227s)
	I0920 20:48:50.941094   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941110   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941144   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.599655439s)
	I0920 20:48:50.941166   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941177   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941213   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.453326238s)
	I0920 20:48:50.941232   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941249   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941456   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941460   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941475   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941490   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941495   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941499   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941502   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941512   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941515   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941523   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941541   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941543   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941555   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941518   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941567   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941568   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941576   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941584   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941590   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941569   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941603   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941610   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941613   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941621   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941887   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941903   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941926   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941935   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941944   17426 addons.go:475] Verifying addon registry=true in "addons-628754"
	I0920 20:48:50.942201   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.942209   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944058   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944089   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944096   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944105   17426 addons.go:475] Verifying addon metrics-server=true in "addons-628754"
	I0920 20:48:50.944376   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944388   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944397   17426 addons.go:475] Verifying addon ingress=true in "addons-628754"
	I0920 20:48:50.944525   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944535   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944664   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944693   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944699   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.945140   17426 out.go:177] * Verifying registry addon...
	I0920 20:48:50.945952   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:50.946263   17426 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-628754 service yakd-dashboard -n yakd-dashboard
	
	I0920 20:48:50.947404   17426 out.go:177] * Verifying ingress addon...
	I0920 20:48:50.948365   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:48:50.949906   17426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 20:48:51.024842   17426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:48:51.024862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.067642   17426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 20:48:51.067672   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.189554   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.189583   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.189883   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.189903   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.189907   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.485820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.486047   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.508824   17426 pod_ready.go:93] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:51.508855   17426 pod_ready.go:82] duration metric: took 4.716232797s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.508867   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.565560   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.898160297s)
	W0920 20:48:51.565610   17426 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565618   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.96259978s)
	I0920 20:48:51.565638   17426 retry.go:31] will retry after 289.771206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565665   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.565681   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.565928   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.565982   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.565995   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566029   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.566041   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.566248   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.566263   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566271   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.856017   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:51.967451   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.967668   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.482714   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.483315   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.577722   17426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.034201768s)
	I0920 20:48:52.577966   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.80498182s)
	I0920 20:48:52.578008   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578045   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578273   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578285   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578293   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578299   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578504   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578523   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578537   17426 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:52.579487   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:52.580427   17426 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:48:52.581909   17426 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:48:52.582957   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:48:52.583079   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:48:52.583105   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:48:52.594165   17426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:48:52.594198   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:52.744237   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:48:52.744290   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:48:52.782030   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.782055   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:48:52.812476   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.953254   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.955366   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.088530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.452082   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.454025   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.518057   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.592525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.952666   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.954796   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.088578   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.137461   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.281390873s)
	I0920 20:48:54.137517   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137536   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137804   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.137827   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.137837   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137845   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137851   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.138105   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.138122   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.467785   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.488772   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.542327   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.729811085s)
	I0920 20:48:54.542374   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542390   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542681   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542702   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.542701   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.542722   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542729   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542936   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542997   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.543017   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.544656   17426 addons.go:475] Verifying addon gcp-auth=true in "addons-628754"
	I0920 20:48:54.546479   17426 out.go:177] * Verifying gcp-auth addon...
	I0920 20:48:54.548531   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:48:54.573659   17426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:48:54.573678   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:54.617981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.952495   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.960055   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.061661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.091945   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.458616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.459309   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:55.516457   17426 pod_ready.go:98] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516485   17426 pod_ready.go:82] duration metric: took 4.007610514s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	E0920 20:48:55.516499   17426 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516516   17426 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520826   17426 pod_ready.go:93] pod "etcd-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.520844   17426 pod_ready.go:82] duration metric: took 4.317233ms for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520857   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526814   17426 pod_ready.go:93] pod "kube-apiserver-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.526835   17426 pod_ready.go:82] duration metric: took 5.969545ms for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526846   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534816   17426 pod_ready.go:93] pod "kube-controller-manager-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.534841   17426 pod_ready.go:82] duration metric: took 7.987084ms for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534855   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541017   17426 pod_ready.go:93] pod "kube-proxy-ck9m7" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.541043   17426 pod_ready.go:82] duration metric: took 6.180196ms for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541055   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.551619   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.593184   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.913365   17426 pod_ready.go:93] pod "kube-scheduler-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.913386   17426 pod_ready.go:82] duration metric: took 372.323338ms for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.913396   17426 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.051699   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.452870   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.454722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:56.551855   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.587689   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.953293   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.955457   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.051952   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.088662   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.452257   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.454267   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.552093   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.588708   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.922382   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:57.954489   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.955101   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.052836   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.088049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.453756   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.454264   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.552172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.588631   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.956248   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.956561   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.054141   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.091590   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.452442   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.457148   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.552573   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.587624   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.958079   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.959212   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.058727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.170305   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.426857   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:00.452423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.454540   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.552677   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.588021   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.952150   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.954190   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.052262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.087142   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.452176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.455052   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.552454   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.587458   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.954352   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.954591   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.052894   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.088280   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.453416   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.453710   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.552308   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.587514   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.919214   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:02.952365   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.954381   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.052491   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.087251   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.453233   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.455205   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.552645   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.588778   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.954362   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.954626   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.052848   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.088290   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.452525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.455051   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.553267   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.587782   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.919682   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:04.951969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.956425   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.052517   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.087781   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.452284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.454925   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.553478   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.588581   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.953511   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.954297   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.051947   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.088183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.453270   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:06.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.553191   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.588354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.920227   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:06.954430   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.954572   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.052484   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.452418   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.454832   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.552476   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.587613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.952831   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.954609   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.052698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.088426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.452555   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.453857   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.587877   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.921045   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:08.953289   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.955044   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.058613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.088441   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.454326   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.454844   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.552759   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.920620   17426 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:09.920641   17426 pod_ready.go:82] duration metric: took 14.007238973s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:09.920649   17426 pod_ready.go:39] duration metric: took 23.202177695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:09.920663   17426 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:09.920712   17426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:09.947981   17426 api_server.go:72] duration metric: took 27.309549406s to wait for apiserver process to appear ...
	I0920 20:49:09.948011   17426 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:09.948029   17426 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I0920 20:49:09.956504   17426 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I0920 20:49:09.958166   17426 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:09.958191   17426 api_server.go:131] duration metric: took 10.173299ms to wait for apiserver health ...
	I0920 20:49:09.958199   17426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:09.958813   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.961812   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.968812   17426 system_pods.go:59] 17 kube-system pods found
	I0920 20:49:09.968841   17426 system_pods.go:61] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.968849   17426 system_pods.go:61] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.968855   17426 system_pods.go:61] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.968863   17426 system_pods.go:61] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.968867   17426 system_pods.go:61] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.968871   17426 system_pods.go:61] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.968874   17426 system_pods.go:61] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.968877   17426 system_pods.go:61] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.968880   17426 system_pods.go:61] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.968883   17426 system_pods.go:61] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.968887   17426 system_pods.go:61] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.968891   17426 system_pods.go:61] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.968896   17426 system_pods.go:61] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.968900   17426 system_pods.go:61] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.968907   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968912   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968915   17426 system_pods.go:61] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.968921   17426 system_pods.go:74] duration metric: took 10.716452ms to wait for pod list to return data ...
	I0920 20:49:09.968929   17426 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:09.971550   17426 default_sa.go:45] found service account: "default"
	I0920 20:49:09.971566   17426 default_sa.go:55] duration metric: took 2.63218ms for default service account to be created ...
	I0920 20:49:09.971573   17426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:09.979493   17426 system_pods.go:86] 17 kube-system pods found
	I0920 20:49:09.979518   17426 system_pods.go:89] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.979525   17426 system_pods.go:89] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.979532   17426 system_pods.go:89] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.979538   17426 system_pods.go:89] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.979546   17426 system_pods.go:89] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.979551   17426 system_pods.go:89] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.979555   17426 system_pods.go:89] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.979559   17426 system_pods.go:89] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.979562   17426 system_pods.go:89] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.979567   17426 system_pods.go:89] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.979572   17426 system_pods.go:89] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.979575   17426 system_pods.go:89] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.979581   17426 system_pods.go:89] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.979585   17426 system_pods.go:89] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.979595   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979600   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979604   17426 system_pods.go:89] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.979610   17426 system_pods.go:126] duration metric: took 8.032198ms to wait for k8s-apps to be running ...
	I0920 20:49:09.979617   17426 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:09.979655   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:09.995230   17426 system_svc.go:56] duration metric: took 15.606321ms WaitForService to wait for kubelet
	I0920 20:49:09.995254   17426 kubeadm.go:582] duration metric: took 27.356825963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:49:09.995272   17426 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:09.999350   17426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 20:49:09.999371   17426 node_conditions.go:123] node cpu capacity is 2
	I0920 20:49:09.999383   17426 node_conditions.go:105] duration metric: took 4.107149ms to run NodePressure ...
	I0920 20:49:09.999393   17426 start.go:241] waiting for startup goroutines ...
	I0920 20:49:10.053460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.087433   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.454100   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.553327   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.587292   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.953419   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.953751   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.052530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.087591   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.452819   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.454012   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.553210   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.588262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.954176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.956729   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.052811   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.088582   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.452880   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.456247   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.551887   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.588295   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.954541   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.954986   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.053227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.088399   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.453995   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.454604   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.553146   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.588109   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.952535   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.954871   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.053058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.088969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.453183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.456277   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.552749   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.587809   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.952580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.954915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.052901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.088153   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.452376   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.454450   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.552178   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.590244   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.952288   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.954887   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.059820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.089668   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.453534   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.455331   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.553285   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.587701   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.952987   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.955195   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.052783   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.088222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.453994   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.455805   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.552334   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.588741   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.951963   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.954638   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.052247   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.088172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.453070   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.454583   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.552893   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.587730   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.176219   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.176692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.177743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.178016   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.452551   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.455179   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.552929   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.588351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.952550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.954625   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.052996   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.088357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.452557   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.455753   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.551985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.587862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.954138   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.955659   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.053752   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.088273   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.452533   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.454412   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.551961   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.213019   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.214276   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.214460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.214725   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455160   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455472   17426 kapi.go:107] duration metric: took 31.507106479s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:22.554860   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.587605   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.959147   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.056319   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.088920   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.454497   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.552505   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.587969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.954323   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.051816   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.088130   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.456274   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.555427   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.588693   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.955063   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.052728   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.087985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.456210   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.552574   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.587488   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.955116   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.054687   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.087638   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.626115   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.626483   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.628344   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.954240   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.052307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.087762   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.454250   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.552081   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.589592   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.953915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.052492   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.088707   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.455291   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.551661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.599028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.956684   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.052143   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.089901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.454734   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.555646   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.588017   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.954631   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.052315   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.089604   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.453995   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.552431   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.587569   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.955286   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.052545   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.087284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.455202   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.551700   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.587298   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.954692   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.052682   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.093227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.457841   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.557548   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.658776   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.954760   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.052426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.087335   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.455135   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.552450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.599114   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.954722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.051989   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.087593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.453655   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.552523   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.596357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.955321   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.052589   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.087503   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.455616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.552302   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.592075   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.956755   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.056028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.157593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.455543   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.551889   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.595187   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.955775   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.052828   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.088401   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.454574   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.588058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.955848   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.053475   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.087807   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.453781   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.551921   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.588049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.956439   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.287727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.288266   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.455413   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.556117   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.587698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.954228   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.052587   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.088235   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.454685   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.552222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.587307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.965032   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.069064   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.169661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.456252   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.551748   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.587763   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.953862   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.054479   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.090521   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.454259   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.553452   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.588439   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.003849   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.052220   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.088192   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.454954   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.552351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.588323   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.955159   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.053036   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.090032   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.454513   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.552038   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.588265   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.955285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.054003   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.156329   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.454696   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.553050   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.588047   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.955073   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.061080   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.159743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.454017   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.552354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.587300   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.954698   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.054845   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.088317   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.455285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.552981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.598550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.955384   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.054653   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.088277   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.552663   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.587506   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.954620   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.051691   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.154600   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.454723   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.553679   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.587997   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.954402   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.058736   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.090947   17426 kapi.go:107] duration metric: took 57.507986835s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:49:50.455270   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.552676   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.954128   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.051625   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.455727   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.552466   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.955330   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.052011   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.454834   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.552770   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.954599   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.055449   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.455037   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.552423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.955090   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.053025   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.552755   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.956136   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.053157   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.455144   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.552459   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.954952   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.054107   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598482   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598807   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.955194   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.054341   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.456555   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.552580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.954657   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.052499   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.454614   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.552294   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.954070   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.054450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.594930   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.595375   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.953489   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.052196   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.454898   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.552561   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.954914   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.055034   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.454793   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.552037   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.955703   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.054975   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.454984   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.552692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.961750   17426 kapi.go:107] duration metric: took 1m12.01184163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 20:50:03.052368   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:03.552411   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.052493   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.552884   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.053084   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.555738   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:06.052349   17426 kapi.go:107] duration metric: took 1m11.503815049s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:50:06.054082   17426 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-628754 cluster.
	I0920 20:50:06.055505   17426 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:50:06.056936   17426 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:50:06.058346   17426 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 20:50:06.059716   17426 addons.go:510] duration metric: took 1m23.421239851s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server nvidia-device-plugin yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 20:50:06.059768   17426 start.go:246] waiting for cluster config update ...
	I0920 20:50:06.059789   17426 start.go:255] writing updated cluster config ...
	I0920 20:50:06.060115   17426 ssh_runner.go:195] Run: rm -f paused
	I0920 20:50:06.111505   17426 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:50:06.113737   17426 out.go:177] * Done! kubectl is now configured to use "addons-628754" cluster and "default" namespace by default
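The gcp-auth hint a few lines above refers to the `gcp-auth-skip-secret` label: a pod carrying that label is skipped by the credential-mounting webhook. A minimal sketch, assuming a hypothetical pod named my-pod and the value "true" (the log only names the label key, not a value):

  kubectl --context addons-628754 label pod my-pod gcp-auth-skip-secret=true

The same label can instead be set under metadata.labels in the pod manifest before the pod is created.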
	
	
	==> CRI-O <==
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.948029735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d75f396-cf88-4c8d-b9bc-728aa69c55bf name=/runtime.v1.RuntimeService/Version
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.948191433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d75f396-cf88-4c8d-b9bc-728aa69c55bf name=/runtime.v1.RuntimeService/Version
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.950099544Z" level=info msg="Removed container 52484418ab79688c33dbbb9183b84c0fb912067060fc4b84a9c5c86d81ca38d3: kube-system/registry-66c9cd494c-6dc7n/registry" file="server/container_remove.go:40" id=1705f80b-d8b8-429c-96ad-4db7f0f18a32 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.950246084Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=1705f80b-d8b8-429c-96ad-4db7f0f18a32 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.951903259Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,Verbose:false,}" file="otel-collector/interceptors.go:62" id=75b350d3-3e99-4eb3-8b96-88ac49dd02b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.953227984Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726865360780042591,StartedAt:1726865360807913873,FinishedAt:1726865963669014132,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-75xfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9752b94-3a87-4a8d-a156-b0f4baae0181,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.c
ontainer.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/containers/registry-proxy/5450e1ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/volumes/kubernetes.io~projected/kube-api-access-5clpl,Rea
donly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-75xfb_d9752b94-3a87-4a8d-a156-b0f4baae0181/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=75b350d3-3e99-4eb3-8b96-88ac49dd02b5 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.954242449Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,Verbose:false,}" file="otel-collector/interceptors.go:62" id=b7f5ea43-aa3f-4606-ad34-f28f312c25ed name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.954539204Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},State:CONTAINER_EXITED,CreatedAt:1726865360780042591,StartedAt:1726865360807913873,FinishedAt:1726865963669014132,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,Reason:Completed,Message:,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-75xfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9752b94-3a87-4a8d-a156-b0f4baae0181,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.c
ontainer.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/containers/registry-proxy/5450e1ad,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/volumes/kubernetes.io~projected/kube-api-access-5clpl,Rea
donly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_registry-proxy-75xfb_d9752b94-3a87-4a8d-a156-b0f4baae0181/registry-proxy/0.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b7f5ea43-aa3f-4606-ad34-f28f312c25ed name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.960333536Z" level=debug msg="Request: &RemoveContainerRequest{ContainerId:d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,}" file="otel-collector/interceptors.go:62" id=a91ba5dd-736a-4d58-adc9-9ff7431a8847 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.960574751Z" level=info msg="Removing container: d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93" file="server/container_remove.go:24" id=a91ba5dd-736a-4d58-adc9-9ff7431a8847 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.966004747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8fb980f4-acd7-48c9-80c0-5a9024eaf989 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.968461362Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726865964968429352,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8fb980f4-acd7-48c9-80c0-5a9024eaf989 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.972091832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07e96726-06ad-4d0b-bd97-83b4fdaccb35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.972159704Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07e96726-06ad-4d0b-bd97-83b4fdaccb35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.972522566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22,PodSandboxId:24e6aeb84bf650647c41c7d37352d6a4edae500af4c8a79035dc8293029551b1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726865401710012255,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-27www,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 5e6262e3-eabd-42e7-8d63-a38f9521aa66,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:beed3a84c7c5ea044cf0bc8e69b00e19e95db3c686a0cf234c4f1e5a1478bc51,PodSandboxId:e4eddcad270f7168a7cbe391b8284a43a3c041dc33b5ec5b1dfb14f46b2d7abb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726865387746402914,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gkvvq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 865d12b2-b990-48b4-aef4-53657d94a027,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8,PodSandboxId:47fc5b3b604b3657acb7c83fb00146686e52d7acf1441a7b1fc20b0b4e115361,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6a
f68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726865376054581309,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-nsvqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d4d57b-92af-4450-b3ea-1de9e09634c0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7eacbc7a27b8fbcf6b6d3171788408c55d4743ad098922818bb9e5bd37fdbdd,PodSandboxId:9117ebff270f082fe7d3fdeb4a5bd66ac7ca729ae4ffaa1af5abd1c76bd1ab24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sh
a256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726865375952988744,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n65gg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77866133-b28c-4906-88f8-f0ccc0eedfc5,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f,PodSandboxId:bb30536452af1e7dc849122f196840e25788907bae6c8646c9b8dc239b255c62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726865375829719728,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-nnfk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e58c25-9906-4f3f-a71d-3fce5e41f93f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&Contai
nerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1476fd78d48eb6907
b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93,PodSandboxId:e102ceafc77a3f360d32f6c8c8e835e5d6b15ac3cee31d6e169b42844a3c30ea,Metadata:&ContainerMetadata{Name:registry-proxy,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/kube-registry-proxy@sha256:08dc5a48792f971b401d3758d4f37fd4af18aa2881668d65fa2c0b3bc61d7af4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38c5e506fa551ba5a1812dff63585e44b6c532dd4984b96f90944730f1c6e5c2,State:CONTAINER_EXITED,CreatedAt:1726865360717443029,Labels:map[string]string{io.kubernetes.container.name: registry-proxy,io.kubernetes.pod.name: registry-proxy-75xfb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9752b94-3a87-4a8d-a156-b0f4baae0181,},Annotations:map[string]string{io.kubernetes.container.hash: c90bc829,io.kubernetes.container.ports: [{\"name\":\"registry\",\"hostPort\":5000,\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2acd221f2863f5ce03bd56c3f24bcb9c01bc212bff2aa506d822d3f20c4916d,PodSandboxId:687656c9ac924d28f8236ff437a8955cb43e044fad87c5a67664cc7c84366717,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726865339948335310,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e534a7bd-e2bb-49ab-a14f-28256e31640f,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containe
rPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubern
etes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-
proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-syst
em,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07e96726-06ad-4d0b-bd97-83b4fdaccb35 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.976963103Z" level=debug msg="Unmounted container d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93" file="storage/runtime.go:495" id=a91ba5dd-736a-4d58-adc9-9ff7431a8847 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.987038875Z" level=info msg="Removed container d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93: kube-system/registry-proxy-75xfb/registry-proxy" file="server/container_remove.go:40" id=a91ba5dd-736a-4d58-adc9-9ff7431a8847 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:24 addons-628754 crio[661]: time="2024-09-20 20:59:24.987152895Z" level=debug msg="Response: &RemoveContainerResponse{}" file="otel-collector/interceptors.go:74" id=a91ba5dd-736a-4d58-adc9-9ff7431a8847 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.028591862Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6a76b8e4-c40b-4caf-8083-dd6f09e92c25 name=/runtime.v1.RuntimeService/Version
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.028667615Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6a76b8e4-c40b-4caf-8083-dd6f09e92c25 name=/runtime.v1.RuntimeService/Version
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.032088946Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6ee91ab-81b6-4a4d-9b30-03050f4c0d69 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.033262124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726865965033231086,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:550631,},InodesUsed:&UInt64Value{Value:187,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6ee91ab-81b6-4a4d-9b30-03050f4c0d69 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.037621516Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3c0d78c-ba88-41c5-9a81-5bc89dbf8f2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.037702206Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3c0d78c-ba88-41c5-9a81-5bc89dbf8f2e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 20:59:25 addons-628754 crio[661]: time="2024-09-20 20:59:25.038262609Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.has
h: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22,PodSandboxId:24e6aeb84bf650647c41c7d37352d6a4edae500af4c8a79035dc8293029551b1,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a80c8fd6e52292d38d4e58453f310d612da59d802a3b62f4b88a21c50178f7ab,State:CONTAINER_RUNNING,CreatedAt:1726865401710012255,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-bc57996ff-27www,io.kubernetes.pod.namespace: ingress-nginx
,io.kubernetes.pod.uid: 5e6262e3-eabd-42e7-8d63-a38f9521aa66,},Annotations:map[string]string{io.kubernetes.container.hash: bbf80d3,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:beed3a84c7c5ea044cf0bc8e69b00e19e95db3c686a0cf234c4f1e5a1478bc51,PodSandboxId:e4eddcad270f7168a7cbe391b8284a43a3c041dc33b5ec5b1dfb14f46b2d7abb,Metadata:&ContainerMetadata{Name:patch,Attempt:2,},Image:&ImageSpec{Image:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,Annotations:map[string]string{},UserSpec
ifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726865387746402914,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-gkvvq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 865d12b2-b990-48b4-aef4-53657d94a027,},Annotations:map[string]string{io.kubernetes.container.hash: eb970c83,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8,PodSandboxId:47fc5b3b604b3657acb7c83fb00146686e52d7acf1441a7b1fc20b0b4e115361,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6a
f68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726865376054581309,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-nsvqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2d4d57b-92af-4450-b3ea-1de9e09634c0,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7eacbc7a27b8fbcf6b6d3171788408c55d4743ad098922818bb9e5bd37fdbdd,PodSandboxId:9117ebff270f082fe7d3fdeb4a5bd66ac7ca729ae4ffaa1af5abd1c76bd1ab24,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sh
a256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242,State:CONTAINER_EXITED,CreatedAt:1726865375952988744,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-n65gg,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 77866133-b28c-4906-88f8-f0ccc0eedfc5,},Annotations:map[string]string{io.kubernetes.container.hash: c5cfc092,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f,PodSandboxId:bb30536452af1e7dc849122f196840e25788907bae6c8646c9b8dc239b255c62,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{I
mage:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1726865375829719728,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-56fcc65765-nnfk8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 94e58c25-9906-4f3f-a71d-3fce5e41f93f,},Annotations:map[string]string{io.kubernetes.container.hash: b7d21815,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&Contai
nerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2acd221f2863f5ce0
3bd56c3f24bcb9c01bc212bff2aa506d822d3f20c4916d,PodSandboxId:687656c9ac924d28f8236ff437a8955cb43e044fad87c5a67664cc7c84366717,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1726865339948335310,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e534a7bd-e2bb-49ab-a14f-28256e31640f,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"
TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.containe
r.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernete
s.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.resta
rtCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3c0d78c-ba88-41c5-9a81-5bc89dbf8f2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                         ATTEMPT             POD ID              POD
	bb0e1c83764c6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              8 seconds ago       Running             nginx                        0                   8aca549d11d28       nginx
	0f6e90ddc80f9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 9 minutes ago       Running             gcp-auth                     0                   4e0a25a9933fe       gcp-auth-89d5ffd79-cbzrf
	a0c0ddf88e508       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             9 minutes ago       Running             controller                   0                   24e6aeb84bf65       ingress-nginx-controller-bc57996ff-27www
	beed3a84c7c5e       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             9 minutes ago       Exited              patch                        2                   e4eddcad270f7       ingress-nginx-admission-patch-gkvvq
	548bcb0e03044       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922      9 minutes ago       Running             volume-snapshot-controller   0                   47fc5b3b604b3       snapshot-controller-56fcc65765-nsvqk
	f7eacbc7a27b8       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   9 minutes ago       Exited              create                       0                   9117ebff270f0       ingress-nginx-admission-create-n65gg
	77071f61ac18f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922      9 minutes ago       Running             volume-snapshot-controller   0                   bb30536452af1       snapshot-controller-56fcc65765-nnfk8
	7bc78aa165fe9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server               0                   1e9f678a16f9c       metrics-server-84c5f94fbc-jcnct
	f2acd221f2863       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns         0                   687656c9ac924       kube-ingress-dns-minikube
	d26a1572da03a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner          0                   18ef2311ef640       storage-provisioner
	eb35b11d6adb6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                      0                   71537944126f0       coredns-7c65d6cfc9-2s7hq
	628b1145299a3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             10 minutes ago      Running             kube-proxy                   0                   eda2575050700       kube-proxy-ck9m7
	8c22fecec8ece       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                         0                   0c7ce0a0dbb52       etcd-addons-628754
	9f47006040cb1       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             10 minutes ago      Running             kube-scheduler               0                   a48bfd7c50b60       kube-scheduler-addons-628754
	6308109d7cef9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             10 minutes ago      Running             kube-controller-manager      0                   8a9d6431fae92       kube-controller-manager-addons-628754
	7d18650c60246       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             10 minutes ago      Running             kube-apiserver               0                   7760f2e14947d       kube-apiserver-addons-628754
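
For reference, a listing like this can usually be reproduced directly on the node with crictl. A minimal sketch, assuming the minikube profile carries the same name as the node and that crictl is available inside the VM:

    # assumes the profile name matches the node name and crictl is present in the VM
    minikube -p addons-628754 ssh "sudo crictl ps -a"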
	
	
	==> coredns [eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167] <==
	[INFO] 127.0.0.1:39672 - 18092 "HINFO IN 1734730656473124985.126132105188895419. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.041138577s
	[INFO] 10.244.0.7:39855 - 37406 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000458004s
	[INFO] 10.244.0.7:39855 - 49690 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000827871s
	[INFO] 10.244.0.7:45470 - 13293 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192818s
	[INFO] 10.244.0.7:45470 - 8936 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000320178s
	[INFO] 10.244.0.7:43873 - 16971 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110387s
	[INFO] 10.244.0.7:43873 - 16457 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096404s
	[INFO] 10.244.0.7:44014 - 47350 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149624s
	[INFO] 10.244.0.7:44014 - 3832 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332831s
	[INFO] 10.244.0.7:47949 - 2333 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070958s
	[INFO] 10.244.0.7:47949 - 57368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030483s
	[INFO] 10.244.0.7:33217 - 42646 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045557s
	[INFO] 10.244.0.7:33217 - 51860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00021064s
	[INFO] 10.244.0.7:33663 - 16494 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081574s
	[INFO] 10.244.0.7:33663 - 7952 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109875s
	[INFO] 10.244.0.7:41599 - 53504 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009227s
	[INFO] 10.244.0.7:41599 - 40450 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000256907s
	[INFO] 10.244.0.21:38291 - 34561 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696839s
	[INFO] 10.244.0.21:37047 - 19984 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286736s
	[INFO] 10.244.0.21:48670 - 59512 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103607s
	[INFO] 10.244.0.21:42040 - 13282 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079655s
	[INFO] 10.244.0.21:49121 - 13348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082438s
	[INFO] 10.244.0.21:43215 - 5085 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080693s
	[INFO] 10.244.0.21:57332 - 41951 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000645949s
	[INFO] 10.244.0.21:51212 - 53235 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001099281s
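
The NXDOMAIN entries above are the expected walk of the pod's DNS search path (kube-system.svc.cluster.local, svc.cluster.local, cluster.local) before the fully qualified query for registry.kube-system.svc.cluster.local returns NOERROR. Similar query lines can be provoked from a throwaway pod; a minimal sketch (the pod name and busybox image are illustrative assumptions):

    # pod name and image are illustrative, not part of the test run
    kubectl run dns-probe --rm -it --restart=Never --image=busybox -- nslookup registry.kube-system.svc.cluster.local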
	
	
	==> describe nodes <==
	Name:               addons-628754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-628754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-628754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-628754
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-628754
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 20:59:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 20:59:10 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 20:59:10 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 20:59:10 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 20:59:10 +0000   Fri, 20 Sep 2024 20:48:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    addons-628754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7c9184a6c7a4f059f8bfecc27a38bef
	  System UUID:                c7c9184a-6c7a-4f05-9f8b-fecc27a38bef
	  Boot ID:                    cf51e733-8187-478e-b6b3-2f37c0eadb57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  gcp-auth                    gcp-auth-89d5ffd79-cbzrf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-27www    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-2s7hq                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     10m
	  kube-system                 etcd-addons-628754                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         10m
	  kube-system                 kube-apiserver-addons-628754                250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-628754       200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-ck9m7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-628754                100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-jcnct             100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeReady                10m                kubelet          Node addons-628754 status is now: NodeReady
	  Normal  RegisteredNode           10m                node-controller  Node addons-628754 event: Registered Node addons-628754 in Controller
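
The node summary above can be regenerated at any point with kubectl; a one-line sketch, assuming the kubectl context shares the node's name:

    # context name assumed to match the node name
    kubectl --context addons-628754 describe node addons-628754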
	
	
	==> dmesg <==
	[  +0.158360] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.019434] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.067774] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.008250] kauditd_printk_skb: 111 callbacks suppressed
	[Sep20 20:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.153986] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.168012] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.152868] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.310325] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.091709] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.646153] kauditd_printk_skb: 6 callbacks suppressed
	[Sep20 20:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.211672] kauditd_printk_skb: 54 callbacks suppressed
	[  +9.022690] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 20:51] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 20:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:55] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:58] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.397806] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.278210] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.586919] kauditd_printk_skb: 56 callbacks suppressed
	[  +8.269165] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.002687] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 20:59] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.497306] kauditd_printk_skb: 54 callbacks suppressed
	
	
	==> etcd [8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7] <==
	{"level":"info","ts":"2024-09-20T20:49:59.376221Z","caller":"traceutil/trace.go:171","msg":"trace[1072542918] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1134; }","duration":"319.965762ms","start":"2024-09-20T20:49:59.056242Z","end":"2024-09-20T20:49:59.376208Z","steps":["trace[1072542918] 'agreement among raft nodes before linearized reading'  (duration: 318.885736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:59.376383Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:49:59.056200Z","time spent":"320.170995ms","remote":"127.0.0.1:36416","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-09-20T20:49:59.565189Z","caller":"traceutil/trace.go:171","msg":"trace[419852999] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"137.143107ms","start":"2024-09-20T20:49:59.428033Z","end":"2024-09-20T20:49:59.565176Z","steps":["trace[419852999] 'read index received'  (duration: 137.009045ms)","trace[419852999] 'applied index is now lower than readState.Index'  (duration: 133.69µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:49:59.565464Z","caller":"traceutil/trace.go:171","msg":"trace[581102703] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"184.808174ms","start":"2024-09-20T20:49:59.380645Z","end":"2024-09-20T20:49:59.565453Z","steps":["trace[581102703] 'process raft request'  (duration: 184.43691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:59.565524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.475086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:49:59.566221Z","caller":"traceutil/trace.go:171","msg":"trace[268053702] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"138.182795ms","start":"2024-09-20T20:49:59.428029Z","end":"2024-09-20T20:49:59.566212Z","steps":["trace[268053702] 'agreement among raft nodes before linearized reading'  (duration: 137.45984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:17.859241Z","caller":"traceutil/trace.go:171","msg":"trace[158193461] linearizableReadLoop","detail":"{readStateIndex:2108; appliedIndex:2107; }","duration":"270.251065ms","start":"2024-09-20T20:58:17.588968Z","end":"2024-09-20T20:58:17.859219Z","steps":["trace[158193461] 'read index received'  (duration: 270.029869ms)","trace[158193461] 'applied index is now lower than readState.Index'  (duration: 220.692µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:58:17.859567Z","caller":"traceutil/trace.go:171","msg":"trace[251460462] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1971; }","duration":"294.826658ms","start":"2024-09-20T20:58:17.564730Z","end":"2024-09-20T20:58:17.859556Z","steps":["trace[251460462] 'process raft request'  (duration: 294.306874ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.341119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-20T20:58:17.860684Z","caller":"traceutil/trace.go:171","msg":"trace[2086911049] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1971; }","duration":"217.238418ms","start":"2024-09-20T20:58:17.643435Z","end":"2024-09-20T20:58:17.860673Z","steps":["trace[2086911049] 'agreement among raft nodes before linearized reading'  (duration: 216.223347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.032072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-09-20T20:58:17.861132Z","caller":"traceutil/trace.go:171","msg":"trace[259109003] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1971; }","duration":"272.17394ms","start":"2024-09-20T20:58:17.588947Z","end":"2024-09-20T20:58:17.861121Z","steps":["trace[259109003] 'agreement among raft nodes before linearized reading'  (duration: 270.978028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.24562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.861502Z","caller":"traceutil/trace.go:171","msg":"trace[1923698082] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1971; }","duration":"150.517679ms","start":"2024-09-20T20:58:17.710976Z","end":"2024-09-20T20:58:17.861494Z","steps":["trace[1923698082] 'agreement among raft nodes before linearized reading'  (duration: 149.237948ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.092261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.864904Z","caller":"traceutil/trace.go:171","msg":"trace[1341777283] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1971; }","duration":"166.743439ms","start":"2024-09-20T20:58:17.698148Z","end":"2024-09-20T20:58:17.864891Z","steps":["trace[1341777283] 'agreement among raft nodes before linearized reading'  (duration: 162.088037ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:33.349055Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1548}
	{"level":"info","ts":"2024-09-20T20:58:33.390006Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1548,"took":"40.358531ms","hash":3091395663,"current-db-size-bytes":7442432,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":4321280,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-20T20:58:33.390117Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091395663,"revision":1548,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T20:58:40.291216Z","caller":"traceutil/trace.go:171","msg":"trace[1233032017] transaction","detail":"{read_only:false; response_revision:2184; number_of_response:1; }","duration":"102.579104ms","start":"2024-09-20T20:58:40.188615Z","end":"2024-09-20T20:58:40.291194Z","steps":["trace[1233032017] 'process raft request'  (duration: 102.479076ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.625494Z","caller":"traceutil/trace.go:171","msg":"trace[1492248117] transaction","detail":"{read_only:false; response_revision:2186; number_of_response:1; }","duration":"359.734741ms","start":"2024-09-20T20:58:41.265745Z","end":"2024-09-20T20:58:41.625480Z","steps":["trace[1492248117] 'process raft request'  (duration: 359.639083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.625688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.265729Z","time spent":"359.847904ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2163 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T20:58:41.635221Z","caller":"traceutil/trace.go:171","msg":"trace[2009681851] transaction","detail":"{read_only:false; response_revision:2188; number_of_response:1; }","duration":"287.660276ms","start":"2024-09-20T20:58:41.347543Z","end":"2024-09-20T20:58:41.635203Z","steps":["trace[2009681851] 'process raft request'  (duration: 287.627001ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.635333Z","caller":"traceutil/trace.go:171","msg":"trace[230871389] transaction","detail":"{read_only:false; response_revision:2187; number_of_response:1; }","duration":"337.317765ms","start":"2024-09-20T20:58:41.298010Z","end":"2024-09-20T20:58:41.635327Z","steps":["trace[230871389] 'process raft request'  (duration: 337.000319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.635417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.297992Z","time spent":"337.357119ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-628754\" mod_revision:2121 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-628754\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-628754\" > >"}
	
	
	==> gcp-auth [0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce] <==
	2024/09/20 20:50:06 Ready to write response ...
	2024/09/20 20:50:09 Ready to marshal response ...
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:50:09 Ready to marshal response ...
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:22 Ready to marshal response ...
	2024/09/20 20:58:22 Ready to write response ...
	2024/09/20 20:58:26 Ready to marshal response ...
	2024/09/20 20:58:26 Ready to write response ...
	2024/09/20 20:58:36 Ready to marshal response ...
	2024/09/20 20:58:36 Ready to write response ...
	2024/09/20 20:59:08 Ready to marshal response ...
	2024/09/20 20:59:08 Ready to write response ...
	2024/09/20 20:59:14 Ready to marshal response ...
	2024/09/20 20:59:14 Ready to write response ...
	
	
	==> kernel <==
	 20:59:25 up 11 min,  0 users,  load average: 1.14, 0.55, 0.35
	Linux addons-628754 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee] <==
	W0920 20:50:29.072389       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 20:50:29.072518       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 20:50:29.073774       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	E0920 20:50:29.080229       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	I0920 20:50:29.156909       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 20:58:12.607107       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.156.22"}
	E0920 20:58:42.815240       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 20:58:47.912557       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 20:59:08.742368       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 20:59:09.785503       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 20:59:14.271182       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 20:59:14.472878       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.230.48"}
	I0920 20:59:24.833550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.833614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.867777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.868426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.889977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.890007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.966388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.966424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.989650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.989712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
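
The 503s against v1beta1.metrics.k8s.io early in the log are the aggregated metrics API being registered before metrics-server was actually serving. The registration and its backing workload can be checked directly; a sketch, with the context name assumed from the node name:

    # context name assumed from the node name
    kubectl --context addons-628754 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-628754 -n kube-system logs deploy/metrics-server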
	
	
	==> kube-controller-manager [6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32] <==
	I0920 20:58:26.348310       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="3.466µs"
	I0920 20:58:27.505702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="6.56µs"
	I0920 20:58:28.194869       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0920 20:58:29.913855       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="12.775µs"
	I0920 20:58:36.471323       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 20:58:40.294575       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-628754"
	I0920 20:59:08.084170       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	E0920 20:59:09.787334       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 20:59:10.753201       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-628754"
	W0920 20:59:11.291209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 20:59:11.291282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 20:59:12.072656       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0920 20:59:12.073000       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 20:59:12.568220       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0920 20:59:12.568264       1 shared_informer.go:320] Caches are synced for garbage collector
	W0920 20:59:14.489902       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 20:59:14.490006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 20:59:18.013222       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 20:59:18.189032       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	W0920 20:59:18.587267       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 20:59:18.587359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 20:59:18.883138       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0920 20:59:19.136018       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-628754"
	I0920 20:59:23.563171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="9.028µs"
	I0920 20:59:25.064915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="135.885µs"
	
	
	==> kube-proxy [628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 20:48:44.327957       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 20:48:44.350577       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.204"]
	E0920 20:48:44.350669       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:48:44.445024       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 20:48:44.445092       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 20:48:44.445120       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:48:44.451041       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:48:44.451371       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:48:44.451402       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:48:44.453177       1 config.go:199] "Starting service config controller"
	I0920 20:48:44.453217       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:48:44.453247       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:48:44.453252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:48:44.454211       1 config.go:328] "Starting node config controller"
	I0920 20:48:44.454239       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:48:44.554025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 20:48:44.554067       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:48:44.554904       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479] <==
	W0920 20:48:35.797909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:48:35.797962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.808611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:48:35.808694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.930646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:35.930696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.931077       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:48:35.931116       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 20:48:35.942346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:35.942395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.021065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.021114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.033505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:48:36.033567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.050598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.050648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.094192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:48:36.094242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.113042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.113500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.211462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:36.211605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.341367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:48:36.341480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 20:48:37.917584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.033475    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clpl\" (UniqueName: \"kubernetes.io/projected/d9752b94-3a87-4a8d-a156-b0f4baae0181-kube-api-access-5clpl\") pod \"d9752b94-3a87-4a8d-a156-b0f4baae0181\" (UID: \"d9752b94-3a87-4a8d-a156-b0f4baae0181\") "
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.038167    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19bd1a7e-1f50-4be0-b5ef-365eff973102-kube-api-access-6rwv9" (OuterVolumeSpecName: "kube-api-access-6rwv9") pod "19bd1a7e-1f50-4be0-b5ef-365eff973102" (UID: "19bd1a7e-1f50-4be0-b5ef-365eff973102"). InnerVolumeSpecName "kube-api-access-6rwv9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.042387    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9752b94-3a87-4a8d-a156-b0f4baae0181-kube-api-access-5clpl" (OuterVolumeSpecName: "kube-api-access-5clpl") pod "d9752b94-3a87-4a8d-a156-b0f4baae0181" (UID: "d9752b94-3a87-4a8d-a156-b0f4baae0181"). InnerVolumeSpecName "kube-api-access-5clpl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.134460    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5clpl\" (UniqueName: \"kubernetes.io/projected/d9752b94-3a87-4a8d-a156-b0f4baae0181-kube-api-access-5clpl\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.134507    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6rwv9\" (UniqueName: \"kubernetes.io/projected/19bd1a7e-1f50-4be0-b5ef-365eff973102-kube-api-access-6rwv9\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.838756    1204 scope.go:117] "RemoveContainer" containerID="52484418ab79688c33dbbb9183b84c0fb912067060fc4b84a9c5c86d81ca38d3"
	Sep 20 20:59:24 addons-628754 kubelet[1204]: I0920 20:59:24.950657    1204 scope.go:117] "RemoveContainer" containerID="d1476fd78d48eb6907b68a06902b76c8c9feb0674494d6c3ac5c0fc8425a4f93"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.649833    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swccj\" (UniqueName: \"kubernetes.io/projected/e2d4d57b-92af-4450-b3ea-1de9e09634c0-kube-api-access-swccj\") pod \"e2d4d57b-92af-4450-b3ea-1de9e09634c0\" (UID: \"e2d4d57b-92af-4450-b3ea-1de9e09634c0\") "
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.649877    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf4c5\" (UniqueName: \"kubernetes.io/projected/94e58c25-9906-4f3f-a71d-3fce5e41f93f-kube-api-access-xf4c5\") pod \"94e58c25-9906-4f3f-a71d-3fce5e41f93f\" (UID: \"94e58c25-9906-4f3f-a71d-3fce5e41f93f\") "
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.653557    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94e58c25-9906-4f3f-a71d-3fce5e41f93f-kube-api-access-xf4c5" (OuterVolumeSpecName: "kube-api-access-xf4c5") pod "94e58c25-9906-4f3f-a71d-3fce5e41f93f" (UID: "94e58c25-9906-4f3f-a71d-3fce5e41f93f"). InnerVolumeSpecName "kube-api-access-xf4c5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.657008    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d4d57b-92af-4450-b3ea-1de9e09634c0-kube-api-access-swccj" (OuterVolumeSpecName: "kube-api-access-swccj") pod "e2d4d57b-92af-4450-b3ea-1de9e09634c0" (UID: "e2d4d57b-92af-4450-b3ea-1de9e09634c0"). InnerVolumeSpecName "kube-api-access-swccj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 20:59:25 addons-628754 kubelet[1204]: E0920 20:59:25.717893    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.721060    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19bd1a7e-1f50-4be0-b5ef-365eff973102" path="/var/lib/kubelet/pods/19bd1a7e-1f50-4be0-b5ef-365eff973102/volumes"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.721485    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfcc7a0-c8ec-4fe1-be7e-23d00c6579b1" path="/var/lib/kubelet/pods/9dfcc7a0-c8ec-4fe1-be7e-23d00c6579b1/volumes"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.721717    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9752b94-3a87-4a8d-a156-b0f4baae0181" path="/var/lib/kubelet/pods/d9752b94-3a87-4a8d-a156-b0f4baae0181/volumes"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.750734    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-swccj\" (UniqueName: \"kubernetes.io/projected/e2d4d57b-92af-4450-b3ea-1de9e09634c0-kube-api-access-swccj\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.750761    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xf4c5\" (UniqueName: \"kubernetes.io/projected/94e58c25-9906-4f3f-a71d-3fce5e41f93f-kube-api-access-xf4c5\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.859969    1204 scope.go:117] "RemoveContainer" containerID="77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.902542    1204 scope.go:117] "RemoveContainer" containerID="77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: E0920 20:59:25.903912    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f\": container with ID starting with 77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f not found: ID does not exist" containerID="77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.904022    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f"} err="failed to get container status \"77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f\": rpc error: code = NotFound desc = could not find container \"77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f\": container with ID starting with 77071f61ac18fbe59412b0139def81c69b5b2dce121b97c58bc4182031dd183f not found: ID does not exist"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.904527    1204 scope.go:117] "RemoveContainer" containerID="548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.920424    1204 scope.go:117] "RemoveContainer" containerID="548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: E0920 20:59:25.921338    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8\": container with ID starting with 548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8 not found: ID does not exist" containerID="548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8"
	Sep 20 20:59:25 addons-628754 kubelet[1204]: I0920 20:59:25.921367    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8"} err="failed to get container status \"548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8\": rpc error: code = NotFound desc = could not find container \"548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8\": container with ID starting with 548bcb0e03044a386849ff2b8b2249f66b0d98559e5ccce6be5e82c3915d97e8 not found: ID does not exist"
	
	
	==> storage-provisioner [d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6] <==
	I0920 20:48:51.342179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:48:51.371362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:48:51.371432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:48:51.522612       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:48:51.529164       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	I0920 20:48:51.523653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7272c10f-2050-4885-9a02-b4fcd176512f", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256 became leader
	I0920 20:48:51.731924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-628754 -n addons-628754
helpers_test.go:261: (dbg) Run:  kubectl --context addons-628754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-n65gg ingress-nginx-admission-patch-gkvvq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-628754 describe pod busybox ingress-nginx-admission-create-n65gg ingress-nginx-admission-patch-gkvvq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-628754 describe pod busybox ingress-nginx-admission-create-n65gg ingress-nginx-admission-patch-gkvvq: exit status 1 (67.884838ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-628754/192.168.39.204
	Start Time:       Fri, 20 Sep 2024 20:50:09 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s2lkf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s2lkf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-628754
	  Normal   Pulling    7m48s (x4 over 9m17s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m17s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m34s (x6 over 9m16s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m3s (x21 over 9m16s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-n65gg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gkvvq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-628754 describe pod busybox ingress-nginx-admission-create-n65gg ingress-nginx-admission-patch-gkvvq: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.84s)

                                                
                                    
TestAddons/parallel/Ingress (151.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-628754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-628754 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-628754 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [db159a66-031e-40fe-aee0-8b2be90d02fc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [db159a66-031e-40fe-aee0-8b2be90d02fc] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003752601s
I0920 20:59:24.537798   16785 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-628754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.057437632s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-628754 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.39.204
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable ingress-dns --alsologtostderr -v=1: (1.389567768s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable ingress --alsologtostderr -v=1: (7.716812047s)
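For context on the failure above: the step at addons_test.go:260 shells out to "minikube ssh" and runs curl against the ingress from inside the VM, and the "Process exited with status 28" in the stderr block is curl's timeout exit code, so the request to 127.0.0.1:80 inside the node never completed before the wrapper gave up after roughly 2m9s. Below is a minimal, illustrative Go sketch of an equivalent manual check; it is not the actual addons_test.go helper. The binary path, profile name, and curl invocation are taken from the log above, while the explicit timeout is an assumption added for the example.

	// Illustrative sketch, not the actual addons_test.go helper: reproduce the
	// failing ingress check by shelling out to "minikube ssh" the same way the
	// test trace shows. Binary path, profile, and curl command come from the log
	// above; the 3-minute timeout is an assumption added for this example.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "addons-628754",
			"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// A non-nil error here corresponds to the "exit status 1" the test
			// reports; curl's own exit code 28 means the request timed out.
			fmt.Printf("ingress check failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("ingress responded:\n%s", out)
	}

If the check succeeds it prints the nginx response; a curl timeout like the one recorded above indicates nothing was answering on port 80 inside the node at the time the test ran.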
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-628754 -n addons-628754
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 logs -n 25: (1.334241541s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-437637                                                                     | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | binary-mirror-545297                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41175                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-545297                                                                     | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-628754 --wait=true                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-628754 ssh cat                                                                       | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | /opt/local-path-provisioner/pvc-3e8f244a-7a2e-48ce-baa7-c2ca4ae064da_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-628754 ip                                                                            | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-628754 ssh curl -s                                                                   | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-628754 ip                                                                            | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:57.083784   17426 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:57.084049   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084058   17426 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:57.084063   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084249   17426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 20:47:57.084816   17426 out.go:352] Setting JSON to false
	I0920 20:47:57.085616   17426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1825,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:57.085733   17426 start.go:139] virtualization: kvm guest
	I0920 20:47:57.088022   17426 out.go:177] * [addons-628754] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:57.089697   17426 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:47:57.089699   17426 notify.go:220] Checking for updates...
	I0920 20:47:57.091121   17426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:57.092364   17426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:47:57.093781   17426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.095208   17426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:47:57.096741   17426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:47:57.098321   17426 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:57.129581   17426 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 20:47:57.130933   17426 start.go:297] selected driver: kvm2
	I0920 20:47:57.131005   17426 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:57.131026   17426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:47:57.131969   17426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.132061   17426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:57.147486   17426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:57.147545   17426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:57.147780   17426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:47:57.147811   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:47:57.147852   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:47:57.147859   17426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:57.147906   17426 start.go:340] cluster config:
	{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:57.148009   17426 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.150689   17426 out.go:177] * Starting "addons-628754" primary control-plane node in "addons-628754" cluster
	I0920 20:47:57.151657   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:47:57.151699   17426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:57.151709   17426 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:57.151783   17426 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 20:47:57.151793   17426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 20:47:57.152110   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:47:57.152131   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json: {Name:mkca579941f55a7fbb4bc1fe5edc46778dec65b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:57.152274   17426 start.go:360] acquireMachinesLock for addons-628754: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 20:47:57.152320   17426 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "addons-628754"
	I0920 20:47:57.152337   17426 start.go:93] Provisioning new machine with config: &{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:47:57.152388   17426 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 20:47:57.153857   17426 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 20:47:57.153989   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:47:57.154024   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:47:57.168714   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0920 20:47:57.169174   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:47:57.169774   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:47:57.169793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:47:57.170182   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:47:57.170355   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:47:57.170487   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:47:57.170634   17426 start.go:159] libmachine.API.Create for "addons-628754" (driver="kvm2")
	I0920 20:47:57.170665   17426 client.go:168] LocalClient.Create starting
	I0920 20:47:57.170716   17426 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 20:47:57.285508   17426 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 20:47:57.498317   17426 main.go:141] libmachine: Running pre-create checks...
	I0920 20:47:57.498341   17426 main.go:141] libmachine: (addons-628754) Calling .PreCreateCheck
	I0920 20:47:57.498846   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:47:57.499269   17426 main.go:141] libmachine: Creating machine...
	I0920 20:47:57.499283   17426 main.go:141] libmachine: (addons-628754) Calling .Create
	I0920 20:47:57.499443   17426 main.go:141] libmachine: (addons-628754) Creating KVM machine...
	I0920 20:47:57.500685   17426 main.go:141] libmachine: (addons-628754) DBG | found existing default KVM network
	I0920 20:47:57.501409   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.501274   17448 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0920 20:47:57.501455   17426 main.go:141] libmachine: (addons-628754) DBG | created network xml: 
	I0920 20:47:57.501475   17426 main.go:141] libmachine: (addons-628754) DBG | <network>
	I0920 20:47:57.501483   17426 main.go:141] libmachine: (addons-628754) DBG |   <name>mk-addons-628754</name>
	I0920 20:47:57.501488   17426 main.go:141] libmachine: (addons-628754) DBG |   <dns enable='no'/>
	I0920 20:47:57.501495   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501501   17426 main.go:141] libmachine: (addons-628754) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 20:47:57.501506   17426 main.go:141] libmachine: (addons-628754) DBG |     <dhcp>
	I0920 20:47:57.501513   17426 main.go:141] libmachine: (addons-628754) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 20:47:57.501518   17426 main.go:141] libmachine: (addons-628754) DBG |     </dhcp>
	I0920 20:47:57.501522   17426 main.go:141] libmachine: (addons-628754) DBG |   </ip>
	I0920 20:47:57.501529   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501533   17426 main.go:141] libmachine: (addons-628754) DBG | </network>
	I0920 20:47:57.501539   17426 main.go:141] libmachine: (addons-628754) DBG | 
	I0920 20:47:57.508174   17426 main.go:141] libmachine: (addons-628754) DBG | trying to create private KVM network mk-addons-628754 192.168.39.0/24...
	I0920 20:47:57.573213   17426 main.go:141] libmachine: (addons-628754) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.573243   17426 main.go:141] libmachine: (addons-628754) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:57.573254   17426 main.go:141] libmachine: (addons-628754) DBG | private KVM network mk-addons-628754 192.168.39.0/24 created
	I0920 20:47:57.573273   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.573184   17448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.573460   17426 main.go:141] libmachine: (addons-628754) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 20:47:57.830461   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.830307   17448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa...
	I0920 20:47:57.903386   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903247   17448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk...
	I0920 20:47:57.903428   17426 main.go:141] libmachine: (addons-628754) DBG | Writing magic tar header
	I0920 20:47:57.903439   17426 main.go:141] libmachine: (addons-628754) DBG | Writing SSH key tar header
	I0920 20:47:57.903449   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903380   17448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.903466   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754
	I0920 20:47:57.903508   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 (perms=drwx------)
	I0920 20:47:57.903526   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 20:47:57.903538   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 20:47:57.903554   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.903628   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 20:47:57.903663   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 20:47:57.903677   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 20:47:57.903688   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 20:47:57.903699   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 20:47:57.903713   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins
	I0920 20:47:57.903723   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home
	I0920 20:47:57.903736   17426 main.go:141] libmachine: (addons-628754) DBG | Skipping /home - not owner
	I0920 20:47:57.903755   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 20:47:57.903769   17426 main.go:141] libmachine: (addons-628754) Creating domain...
	I0920 20:47:57.904753   17426 main.go:141] libmachine: (addons-628754) define libvirt domain using xml: 
	I0920 20:47:57.904778   17426 main.go:141] libmachine: (addons-628754) <domain type='kvm'>
	I0920 20:47:57.904788   17426 main.go:141] libmachine: (addons-628754)   <name>addons-628754</name>
	I0920 20:47:57.904826   17426 main.go:141] libmachine: (addons-628754)   <memory unit='MiB'>4000</memory>
	I0920 20:47:57.904840   17426 main.go:141] libmachine: (addons-628754)   <vcpu>2</vcpu>
	I0920 20:47:57.904849   17426 main.go:141] libmachine: (addons-628754)   <features>
	I0920 20:47:57.904882   17426 main.go:141] libmachine: (addons-628754)     <acpi/>
	I0920 20:47:57.904904   17426 main.go:141] libmachine: (addons-628754)     <apic/>
	I0920 20:47:57.904935   17426 main.go:141] libmachine: (addons-628754)     <pae/>
	I0920 20:47:57.904957   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.904970   17426 main.go:141] libmachine: (addons-628754)   </features>
	I0920 20:47:57.904997   17426 main.go:141] libmachine: (addons-628754)   <cpu mode='host-passthrough'>
	I0920 20:47:57.905010   17426 main.go:141] libmachine: (addons-628754)   
	I0920 20:47:57.905023   17426 main.go:141] libmachine: (addons-628754)   </cpu>
	I0920 20:47:57.905034   17426 main.go:141] libmachine: (addons-628754)   <os>
	I0920 20:47:57.905043   17426 main.go:141] libmachine: (addons-628754)     <type>hvm</type>
	I0920 20:47:57.905052   17426 main.go:141] libmachine: (addons-628754)     <boot dev='cdrom'/>
	I0920 20:47:57.905061   17426 main.go:141] libmachine: (addons-628754)     <boot dev='hd'/>
	I0920 20:47:57.905071   17426 main.go:141] libmachine: (addons-628754)     <bootmenu enable='no'/>
	I0920 20:47:57.905078   17426 main.go:141] libmachine: (addons-628754)   </os>
	I0920 20:47:57.905087   17426 main.go:141] libmachine: (addons-628754)   <devices>
	I0920 20:47:57.905096   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='cdrom'>
	I0920 20:47:57.905111   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/boot2docker.iso'/>
	I0920 20:47:57.905125   17426 main.go:141] libmachine: (addons-628754)       <target dev='hdc' bus='scsi'/>
	I0920 20:47:57.905135   17426 main.go:141] libmachine: (addons-628754)       <readonly/>
	I0920 20:47:57.905150   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905168   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='disk'>
	I0920 20:47:57.905191   17426 main.go:141] libmachine: (addons-628754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 20:47:57.905212   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk'/>
	I0920 20:47:57.905226   17426 main.go:141] libmachine: (addons-628754)       <target dev='hda' bus='virtio'/>
	I0920 20:47:57.905236   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905246   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905258   17426 main.go:141] libmachine: (addons-628754)       <source network='mk-addons-628754'/>
	I0920 20:47:57.905267   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905277   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905292   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905308   17426 main.go:141] libmachine: (addons-628754)       <source network='default'/>
	I0920 20:47:57.905320   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905335   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905345   17426 main.go:141] libmachine: (addons-628754)     <serial type='pty'>
	I0920 20:47:57.905351   17426 main.go:141] libmachine: (addons-628754)       <target port='0'/>
	I0920 20:47:57.905362   17426 main.go:141] libmachine: (addons-628754)     </serial>
	I0920 20:47:57.905372   17426 main.go:141] libmachine: (addons-628754)     <console type='pty'>
	I0920 20:47:57.905387   17426 main.go:141] libmachine: (addons-628754)       <target type='serial' port='0'/>
	I0920 20:47:57.905404   17426 main.go:141] libmachine: (addons-628754)     </console>
	I0920 20:47:57.905414   17426 main.go:141] libmachine: (addons-628754)     <rng model='virtio'>
	I0920 20:47:57.905426   17426 main.go:141] libmachine: (addons-628754)       <backend model='random'>/dev/random</backend>
	I0920 20:47:57.905436   17426 main.go:141] libmachine: (addons-628754)     </rng>
	I0920 20:47:57.905445   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905453   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905463   17426 main.go:141] libmachine: (addons-628754)   </devices>
	I0920 20:47:57.905471   17426 main.go:141] libmachine: (addons-628754) </domain>
	I0920 20:47:57.905483   17426 main.go:141] libmachine: (addons-628754) 
	I0920 20:47:57.911357   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:e6:b4:6f in network default
	I0920 20:47:57.911872   17426 main.go:141] libmachine: (addons-628754) Ensuring networks are active...
	I0920 20:47:57.911889   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:57.912569   17426 main.go:141] libmachine: (addons-628754) Ensuring network default is active
	I0920 20:47:57.912863   17426 main.go:141] libmachine: (addons-628754) Ensuring network mk-addons-628754 is active
	I0920 20:47:57.913288   17426 main.go:141] libmachine: (addons-628754) Getting domain xml...
	I0920 20:47:57.913898   17426 main.go:141] libmachine: (addons-628754) Creating domain...
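
(Editor's note: the log above shows the kvm2 driver composing the libvirt domain XML by hand, defining it, and then booting it. The sketch below illustrates that define-then-create sequence with the libvirt Go bindings; the import path, connection URI, and the shortened XML are assumptions for illustration, not minikube's actual driver code.)

    package main

    import (
    	"fmt"
    	"log"

    	libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    func main() {
    	// Connect to the local system libvirt daemon, as the kvm2 driver does.
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatalf("connect: %v", err)
    	}
    	defer conn.Close()

    	// A minimal stand-in for the <domain type='kvm'>...</domain> document in the log.
    	domainXML := `<domain type='kvm'>
      <name>example</name>
      <memory unit='MiB'>512</memory>
      <vcpu>1</vcpu>
      <os><type>hvm</type></os>
    </domain>`

    	// Define the persistent domain configuration, then start ("create") it.
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		log.Fatalf("define domain: %v", err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		log.Fatalf("start domain: %v", err)
    	}
    	fmt.Println("domain defined and started")
    }
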
	I0920 20:47:59.326755   17426 main.go:141] libmachine: (addons-628754) Waiting to get IP...
	I0920 20:47:59.327557   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.328005   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.328033   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.327972   17448 retry.go:31] will retry after 302.559317ms: waiting for machine to come up
	I0920 20:47:59.632500   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.632880   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.632917   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.632857   17448 retry.go:31] will retry after 377.023743ms: waiting for machine to come up
	I0920 20:48:00.011524   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.012077   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.012099   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.012026   17448 retry.go:31] will retry after 369.422393ms: waiting for machine to come up
	I0920 20:48:00.382598   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.383102   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.383131   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.383057   17448 retry.go:31] will retry after 534.800716ms: waiting for machine to come up
	I0920 20:48:00.919797   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.920325   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.920348   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.920289   17448 retry.go:31] will retry after 493.667737ms: waiting for machine to come up
	I0920 20:48:01.416002   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:01.416381   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:01.416402   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:01.416339   17448 retry.go:31] will retry after 621.848692ms: waiting for machine to come up
	I0920 20:48:02.040290   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.040677   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.040701   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.040643   17448 retry.go:31] will retry after 861.306251ms: waiting for machine to come up
	I0920 20:48:02.903325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.903717   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.903735   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.903678   17448 retry.go:31] will retry after 1.459954527s: waiting for machine to come up
	I0920 20:48:04.365414   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:04.365855   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:04.365878   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:04.365791   17448 retry.go:31] will retry after 1.358721041s: waiting for machine to come up
	I0920 20:48:05.726477   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:05.726947   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:05.726992   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:05.726912   17448 retry.go:31] will retry after 1.859066738s: waiting for machine to come up
	I0920 20:48:07.588072   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:07.588572   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:07.588597   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:07.588527   17448 retry.go:31] will retry after 2.363120827s: waiting for machine to come up
	I0920 20:48:09.953935   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:09.954415   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:09.954440   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:09.954387   17448 retry.go:31] will retry after 2.276156744s: waiting for machine to come up
	I0920 20:48:12.232531   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:12.232978   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:12.233000   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:12.232935   17448 retry.go:31] will retry after 3.494675482s: waiting for machine to come up
	I0920 20:48:15.728788   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:15.729264   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:15.729297   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:15.729233   17448 retry.go:31] will retry after 3.623474403s: waiting for machine to come up
	I0920 20:48:19.353890   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.354300   17426 main.go:141] libmachine: (addons-628754) Found IP for machine: 192.168.39.204
	I0920 20:48:19.354318   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has current primary IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.354324   17426 main.go:141] libmachine: (addons-628754) Reserving static IP address...
	I0920 20:48:19.354718   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find host DHCP lease matching {name: "addons-628754", mac: "52:54:00:59:f8:c3", ip: "192.168.39.204"} in network mk-addons-628754
	I0920 20:48:19.430989   17426 main.go:141] libmachine: (addons-628754) DBG | Getting to WaitForSSH function...
	I0920 20:48:19.431022   17426 main.go:141] libmachine: (addons-628754) Reserved static IP address: 192.168.39.204
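
(Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines are a poll loop: the driver asks libvirt for the domain's DHCP lease and, while none exists, sleeps for a growing interval before trying again. A generic sketch of that pattern follows; the checkIP callback and the timing constants are illustrative, not minikube's retry.go.)

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // waitForIP polls check until it returns an address or the deadline passes,
    // growing the delay between attempts, roughly like the log above.
    func waitForIP(check func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := check(); err == nil && ip != "" {
    			return ip, nil
    		}
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
    		time.Sleep(delay)
    		delay = delay * 3 / 2 // back off gradually between attempts
    	}
    	return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
    	attempts := 0
    	ip, err := waitForIP(func() (string, error) {
    		attempts++
    		if attempts < 4 {
    			return "", errors.New("no DHCP lease yet") // simulate the lease not existing
    		}
    		return "192.168.39.204", nil
    	}, 30*time.Second)
    	fmt.Println(ip, err)
    }
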
	I0920 20:48:19.431035   17426 main.go:141] libmachine: (addons-628754) Waiting for SSH to be available...
	I0920 20:48:19.433368   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433729   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.433754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433893   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH client type: external
	I0920 20:48:19.433951   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa (-rw-------)
	I0920 20:48:19.433990   17426 main.go:141] libmachine: (addons-628754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 20:48:19.434007   17426 main.go:141] libmachine: (addons-628754) DBG | About to run SSH command:
	I0920 20:48:19.434034   17426 main.go:141] libmachine: (addons-628754) DBG | exit 0
	I0920 20:48:19.567000   17426 main.go:141] libmachine: (addons-628754) DBG | SSH cmd err, output: <nil>: 
	I0920 20:48:19.567274   17426 main.go:141] libmachine: (addons-628754) KVM machine creation complete!
	I0920 20:48:19.567605   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:19.568125   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568301   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568423   17426 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 20:48:19.568447   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:19.569781   17426 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 20:48:19.569796   17426 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 20:48:19.569803   17426 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 20:48:19.569812   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.572230   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572609   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.572636   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572734   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.572904   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573263   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.573417   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.573596   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.573606   17426 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 20:48:19.686484   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:19.686510   17426 main.go:141] libmachine: Detecting the provisioner...
	I0920 20:48:19.686520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.689488   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.689939   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.689963   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.690171   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.690367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690533   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690675   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.690829   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.691037   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.691051   17426 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 20:48:19.803894   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 20:48:19.803943   17426 main.go:141] libmachine: found compatible host: buildroot
	I0920 20:48:19.803949   17426 main.go:141] libmachine: Provisioning with buildroot...
	I0920 20:48:19.803959   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804229   17426 buildroot.go:166] provisioning hostname "addons-628754"
	I0920 20:48:19.804257   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804429   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.807930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808301   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.808325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808470   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.808640   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808753   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808855   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.808986   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.809178   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.809194   17426 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-628754 && echo "addons-628754" | sudo tee /etc/hostname
	I0920 20:48:19.933979   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-628754
	
	I0920 20:48:19.934010   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.936671   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.936966   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.937020   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.937216   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.937412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937572   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937681   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.937835   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.938005   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.938027   17426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-628754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-628754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-628754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:48:20.060621   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:20.060648   17426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 20:48:20.060666   17426 buildroot.go:174] setting up certificates
	I0920 20:48:20.060675   17426 provision.go:84] configureAuth start
	I0920 20:48:20.060683   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:20.060932   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.063555   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.063906   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.063931   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.064077   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.066288   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066581   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.066599   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066772   17426 provision.go:143] copyHostCerts
	I0920 20:48:20.066842   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 20:48:20.067014   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 20:48:20.067102   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 20:48:20.067175   17426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.addons-628754 san=[127.0.0.1 192.168.39.204 addons-628754 localhost minikube]
	I0920 20:48:20.207459   17426 provision.go:177] copyRemoteCerts
	I0920 20:48:20.207541   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:48:20.207568   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.210051   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210438   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.210465   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210641   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.210815   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.210936   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.211089   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.297517   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 20:48:20.323502   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 20:48:20.348382   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 20:48:20.372714   17426 provision.go:87] duration metric: took 312.024389ms to configureAuth
	I0920 20:48:20.372743   17426 buildroot.go:189] setting minikube options for container-runtime
	I0920 20:48:20.372974   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:20.373093   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.375745   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376103   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.376129   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376286   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.376456   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376577   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376741   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.376885   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.377069   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.377088   17426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 20:48:20.603620   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 20:48:20.603645   17426 main.go:141] libmachine: Checking connection to Docker...
	I0920 20:48:20.603654   17426 main.go:141] libmachine: (addons-628754) Calling .GetURL
	I0920 20:48:20.605014   17426 main.go:141] libmachine: (addons-628754) DBG | Using libvirt version 6000000
	I0920 20:48:20.607455   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607777   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.607807   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607965   17426 main.go:141] libmachine: Docker is up and running!
	I0920 20:48:20.607980   17426 main.go:141] libmachine: Reticulating splines...
	I0920 20:48:20.607989   17426 client.go:171] duration metric: took 23.437316144s to LocalClient.Create
	I0920 20:48:20.608018   17426 start.go:167] duration metric: took 23.437384362s to libmachine.API.Create "addons-628754"
	I0920 20:48:20.608030   17426 start.go:293] postStartSetup for "addons-628754" (driver="kvm2")
	I0920 20:48:20.608041   17426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:20.608073   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.608293   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:20.608319   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.610317   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610627   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.610652   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610781   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.610921   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.611034   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.611124   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.698116   17426 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:48:20.702771   17426 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 20:48:20.702801   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 20:48:20.702873   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 20:48:20.702896   17426 start.go:296] duration metric: took 94.859468ms for postStartSetup
	I0920 20:48:20.702927   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:20.703462   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.705754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706028   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.706043   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706282   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:48:20.706477   17426 start.go:128] duration metric: took 23.554079183s to createHost
	I0920 20:48:20.706501   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.708765   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709058   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.709077   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709229   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.709401   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709532   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709626   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.709751   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.709947   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.709959   17426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 20:48:20.824097   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726865300.788019293
	
	I0920 20:48:20.824119   17426 fix.go:216] guest clock: 1726865300.788019293
	I0920 20:48:20.824129   17426 fix.go:229] Guest: 2024-09-20 20:48:20.788019293 +0000 UTC Remote: 2024-09-20 20:48:20.706488068 +0000 UTC m=+23.656717453 (delta=81.531225ms)
	I0920 20:48:20.824169   17426 fix.go:200] guest clock delta is within tolerance: 81.531225ms
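
(Editor's note: fix.go reads the guest clock with `date +%s.%N` over SSH and compares it with the host clock, only resyncing when the delta exceeds a tolerance. A small sketch of parsing that output and computing the delta; the 2-second tolerance below is an illustrative value, not necessarily minikube's actual threshold.)

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1726865300.788019293") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < 2*time.Second)
    }
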
	I0920 20:48:20.824174   17426 start.go:83] releasing machines lock for "addons-628754", held for 23.671844938s
	I0920 20:48:20.824195   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.824499   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.827015   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827394   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.827425   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827590   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828104   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828268   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828372   17426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:48:20.828427   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.828488   17426 ssh_runner.go:195] Run: cat /version.json
	I0920 20:48:20.828511   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.831137   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831303   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831481   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831511   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831658   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831693   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831734   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831898   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831916   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832049   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832060   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832158   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832182   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.832264   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.937268   17426 ssh_runner.go:195] Run: systemctl --version
	I0920 20:48:20.943278   17426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 20:48:21.102731   17426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 20:48:21.109306   17426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 20:48:21.109385   17426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:21.125977   17426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 20:48:21.126016   17426 start.go:495] detecting cgroup driver to use...
	I0920 20:48:21.126077   17426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 20:48:21.141923   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 20:48:21.156336   17426 docker.go:217] disabling cri-docker service (if available) ...
	I0920 20:48:21.156395   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 20:48:21.170041   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 20:48:21.183756   17426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 20:48:21.294187   17426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 20:48:21.437230   17426 docker.go:233] disabling docker service ...
	I0920 20:48:21.437293   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 20:48:21.452740   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 20:48:21.465572   17426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 20:48:21.594165   17426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 20:48:21.703184   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 20:48:21.716853   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:21.735742   17426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 20:48:21.735807   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.746697   17426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 20:48:21.746783   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.758620   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.769552   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.780345   17426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:21.791319   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.802437   17426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.820563   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.831706   17426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:21.841879   17426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 20:48:21.841934   17426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 20:48:21.855580   17426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:48:21.865881   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:21.975704   17426 ssh_runner.go:195] Run: sudo systemctl restart crio
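
(Editor's note: the commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning the pause image and switching the cgroup manager to cgroupfs before CRI-O is restarted. Below is a rough Go equivalent of one of those in-place key edits; the config excerpt is a made-up stand-in, not the real drop-in file.)

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setConfigValue replaces the value of a `key = ...` line, mirroring
    // `sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'`.
    func setConfigValue(conf, key, value string) string {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
    }

    func main() {
    	conf := `[crio.runtime]
    cgroup_manager = "systemd"
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    `
    	conf = setConfigValue(conf, "cgroup_manager", "cgroupfs")
    	conf = setConfigValue(conf, "pause_image", "registry.k8s.io/pause:3.10")
    	fmt.Print(conf)
    }
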
	I0920 20:48:22.076074   17426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 20:48:22.076172   17426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 20:48:22.081028   17426 start.go:563] Will wait 60s for crictl version
	I0920 20:48:22.081104   17426 ssh_runner.go:195] Run: which crictl
	I0920 20:48:22.085024   17426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:48:22.123994   17426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 20:48:22.124131   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.155608   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.254304   17426 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 20:48:22.271322   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:22.274135   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274494   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:22.274519   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274758   17426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:22.279562   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:22.292693   17426 kubeadm.go:883] updating cluster {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:48:22.292826   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:48:22.292872   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:22.328943   17426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 20:48:22.329008   17426 ssh_runner.go:195] Run: which lz4
	I0920 20:48:22.333252   17426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 20:48:22.337720   17426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 20:48:22.337751   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 20:48:23.694739   17426 crio.go:462] duration metric: took 1.361518572s to copy over tarball
	I0920 20:48:23.694808   17426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 20:48:25.883886   17426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18904765s)
	I0920 20:48:25.883920   17426 crio.go:469] duration metric: took 2.18915492s to extract the tarball
	I0920 20:48:25.883930   17426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 20:48:25.922070   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:25.969998   17426 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 20:48:25.970026   17426 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:48:25.970037   17426 kubeadm.go:934] updating node { 192.168.39.204 8443 v1.31.1 crio true true} ...
	I0920 20:48:25.970130   17426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-628754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 20:48:25.970191   17426 ssh_runner.go:195] Run: crio config
	I0920 20:48:26.021952   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:26.021980   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:26.021993   17426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:26.022019   17426 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-628754 NodeName:addons-628754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:26.022191   17426 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-628754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
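The kubeadm.go:187 dump above is one multi-document YAML file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A small Go sketch that walks such a file document by document, using the widely used gopkg.in/yaml.v3 package (the local file name is hypothetical, and this is not minikube's own parsing code):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// The log writes the generated config to /var/tmp/minikube/kubeadm.yaml.new on the guest;
	// here we just read a local copy.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document announces its own schema, e.g. kubeadm.k8s.io/v1beta3 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}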
	I0920 20:48:26.022274   17426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:26.032477   17426 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:48:26.032538   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:26.042683   17426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 20:48:26.059732   17426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:26.076258   17426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 20:48:26.093707   17426 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:26.097791   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:26.110716   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:26.236386   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:26.253722   17426 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754 for IP: 192.168.39.204
	I0920 20:48:26.253745   17426 certs.go:194] generating shared ca certs ...
	I0920 20:48:26.253764   17426 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.253934   17426 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 20:48:26.484930   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt ...
	I0920 20:48:26.484965   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt: {Name:mke2d7aeeb7db3e9800a1083a3ff2f86790ad6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485165   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key ...
	I0920 20:48:26.485179   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key: {Name:mk17ffdadb7b9aab721bb9d1c788c9a2ddb298de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485278   17426 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 20:48:26.719056   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt ...
	I0920 20:48:26.719086   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt: {Name:mkfada96f4c7b2fd8274928ed1dbb16b11a88c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719248   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key ...
	I0920 20:48:26.719258   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key: {Name:mkd5ebdfcf48f4c7d47e8f4a6008f58fc89c2fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719323   17426 certs.go:256] generating profile certs ...
	I0920 20:48:26.719397   17426 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key
	I0920 20:48:26.719413   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt with IP's: []
	I0920 20:48:26.933060   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt ...
	I0920 20:48:26.933092   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: {Name:mk91a397bc4ff329a96b4b263f0366897ac67b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933260   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key ...
	I0920 20:48:26.933270   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key: {Name:mkb7eb1952eb7eecce91764549c5ad4831a240c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933342   17426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd
	I0920 20:48:26.933369   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I0920 20:48:27.039700   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd ...
	I0920 20:48:27.039729   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd: {Name:mke2e260dbaf2f40e6a6aae6719316d894241e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd ...
	I0920 20:48:27.039902   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd: {Name:mk8e28dd00075b54e6ff8d4a4eca983c0a2a96b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039979   17426 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt
	I0920 20:48:27.040049   17426 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key
	I0920 20:48:27.040093   17426 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key
	I0920 20:48:27.040110   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt with IP's: []
	I0920 20:48:27.264693   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt ...
	I0920 20:48:27.264722   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt: {Name:mkb317a338bad2a13df21d64fc6002a23cf2c57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.264890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key ...
	I0920 20:48:27.264900   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key: {Name:mk1033c85ac3998dbc1def471b2b99da31018275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.265070   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:48:27.265101   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:48:27.265124   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:27.265145   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
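The certs.go steps above first create a self-signed minikubeCA and proxyClientCA, then sign the per-profile certificates with them. A minimal Go sketch of generating a self-signed CA with the standard crypto/x509 package (the key size, lifetime and output file names are illustrative, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Generate the CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Self-signed CA template; the CN mirrors the minikubeCA name seen in the log.
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}

	// Sign the certificate with its own key (self-signed root).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}

	// PEM-encode the pair, roughly matching the ca.crt / ca.key files written above.
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", certPEM, 0644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0600); err != nil {
		panic(err)
	}
}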
	I0920 20:48:27.265681   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:27.295419   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 20:48:27.327322   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:27.354123   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:48:27.380181   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 20:48:27.409469   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 20:48:27.434423   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:27.459334   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 20:48:27.486398   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:27.514949   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:27.535057   17426 ssh_runner.go:195] Run: openssl version
	I0920 20:48:27.541556   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:27.552841   17426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557488   17426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557542   17426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.563591   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:48:27.574585   17426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:27.578724   17426 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:27.578767   17426 kubeadm.go:392] StartCluster: {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:27.578828   17426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 20:48:27.578888   17426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 20:48:27.614372   17426 cri.go:89] found id: ""
	I0920 20:48:27.614428   17426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:27.624616   17426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:27.634583   17426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:27.644397   17426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:27.644424   17426 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:27.644475   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:27.654186   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:27.654252   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:27.663878   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:27.673440   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:27.673505   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:27.683173   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.692642   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:27.692697   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.703397   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:27.713535   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:27.713604   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 20:48:27.723779   17426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 20:48:27.782861   17426 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:27.782945   17426 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:27.906000   17426 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:27.906163   17426 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:27.906287   17426 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:27.919732   17426 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:27.997360   17426 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:27.997466   17426 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:27.997553   17426 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:28.094857   17426 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:28.239479   17426 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:28.395927   17426 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:28.720325   17426 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:28.850828   17426 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:28.851063   17426 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:28.998676   17426 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:28.998936   17426 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:29.508367   17426 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:48:29.566861   17426 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:48:29.658573   17426 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:48:29.658676   17426 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:48:30.255811   17426 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:48:30.388840   17426 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:48:30.636127   17426 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:48:30.741849   17426 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:48:30.809198   17426 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:48:30.809704   17426 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:48:30.812136   17426 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:48:30.814442   17426 out.go:235]   - Booting up control plane ...
	I0920 20:48:30.814560   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:48:30.814659   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:48:30.814748   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:48:30.829867   17426 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:48:30.836797   17426 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:48:30.837005   17426 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:48:30.959791   17426 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:48:30.959951   17426 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:48:31.461582   17426 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.116569ms
	I0920 20:48:31.461681   17426 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:48:36.962881   17426 kubeadm.go:310] [api-check] The API server is healthy after 5.50286333s
	I0920 20:48:36.984905   17426 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:48:37.018104   17426 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:48:37.059205   17426 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:48:37.059469   17426 kubeadm.go:310] [mark-control-plane] Marking the node addons-628754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:48:37.076513   17426 kubeadm.go:310] [bootstrap-token] Using token: bdg12k.viyp3bffzs7386tv
	I0920 20:48:37.078129   17426 out.go:235]   - Configuring RBAC rules ...
	I0920 20:48:37.078263   17426 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:48:37.084820   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:48:37.097641   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:48:37.105580   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:48:37.114381   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:48:37.118116   17426 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:48:37.369422   17426 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:48:37.806370   17426 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:48:38.367972   17426 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:48:38.369871   17426 kubeadm.go:310] 
	I0920 20:48:38.369946   17426 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:48:38.369957   17426 kubeadm.go:310] 
	I0920 20:48:38.370072   17426 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:48:38.370082   17426 kubeadm.go:310] 
	I0920 20:48:38.370118   17426 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:48:38.370196   17426 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:48:38.370272   17426 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:48:38.370283   17426 kubeadm.go:310] 
	I0920 20:48:38.370376   17426 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:48:38.370400   17426 kubeadm.go:310] 
	I0920 20:48:38.370460   17426 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:48:38.370469   17426 kubeadm.go:310] 
	I0920 20:48:38.370530   17426 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:48:38.370630   17426 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:48:38.370742   17426 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:48:38.370757   17426 kubeadm.go:310] 
	I0920 20:48:38.370878   17426 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:48:38.370978   17426 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:48:38.370989   17426 kubeadm.go:310] 
	I0920 20:48:38.371071   17426 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371162   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 20:48:38.371182   17426 kubeadm.go:310] 	--control-plane 
	I0920 20:48:38.371188   17426 kubeadm.go:310] 
	I0920 20:48:38.371275   17426 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:48:38.371291   17426 kubeadm.go:310] 
	I0920 20:48:38.371414   17426 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371569   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 20:48:38.373923   17426 kubeadm.go:310] W0920 20:48:27.749585     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374277   17426 kubeadm.go:310] W0920 20:48:27.750640     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374448   17426 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 20:48:38.374477   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:38.374486   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:38.376506   17426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:48:38.377736   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:48:38.391254   17426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
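The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist is not printed in the log, so its exact contents are unknown here; the sketch below writes a typical bridge-plus-portmap conflist as an assumption, reusing the 10.244.0.0/16 pod CIDR from this run:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Every field below is an assumption about what a bridge CNI conflist usually contains;
	// it is not a copy of minikube's 1-k8s.conflist.
	conflist := map[string]interface{}{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Local output path; the log installs the real file under /etc/cni/net.d/ on the guest.
	if err := os.WriteFile("1-k8s.conflist", append(data, '\n'), 0644); err != nil {
		panic(err)
	}
}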
	I0920 20:48:38.410759   17426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:48:38.410828   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:38.410916   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-628754 minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-628754 minikube.k8s.io/primary=true
	I0920 20:48:38.438031   17426 ops.go:34] apiserver oom_adj: -16
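The ops.go check above reads the API server's oom_adj (-16 means the kernel OOM killer strongly avoids it). A small Go sketch of the equivalent of cat /proc/$(pgrep kube-apiserver)/oom_adj, done by scanning /proc (illustrative only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const name = "kube-apiserver"
	entries, err := os.ReadDir("/proc")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		if !e.IsDir() {
			continue
		}
		// /proc/<pid>/comm holds the short process name; non-PID entries simply fail to match.
		comm, err := os.ReadFile(filepath.Join("/proc", e.Name(), "comm"))
		if err != nil || strings.TrimSpace(string(comm)) != name {
			continue
		}
		adj, err := os.ReadFile(filepath.Join("/proc", e.Name(), "oom_adj"))
		if err != nil {
			continue
		}
		fmt.Printf("pid %s oom_adj %s", e.Name(), adj)
		return
	}
	fmt.Println("kube-apiserver not found")
}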
	I0920 20:48:38.537631   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.038754   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.537782   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.038712   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.538203   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.038038   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.538335   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.038436   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.537928   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.637542   17426 kubeadm.go:1113] duration metric: took 4.226768279s to wait for elevateKubeSystemPrivileges
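The repeated kubectl get sa default calls above are a readiness poll: elevateKubeSystemPrivileges is finished once the default ServiceAccount exists. A hedged Go sketch of the same poll-until-ready loop (the kubectl binary on PATH, the kubeconfig path and the two-minute timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Exit status 0 means the ServiceAccount exists; any error means "not yet".
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}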
	I0920 20:48:42.637585   17426 kubeadm.go:394] duration metric: took 15.05882001s to StartCluster
	I0920 20:48:42.637611   17426 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.637771   17426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:48:42.638195   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.638402   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:48:42.638407   17426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:48:42.638474   17426 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 20:48:42.638588   17426 addons.go:69] Setting yakd=true in profile "addons-628754"
	I0920 20:48:42.638602   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.638606   17426 addons.go:69] Setting inspektor-gadget=true in profile "addons-628754"
	I0920 20:48:42.638622   17426 addons.go:69] Setting volcano=true in profile "addons-628754"
	I0920 20:48:42.638632   17426 addons.go:234] Setting addon volcano=true in "addons-628754"
	I0920 20:48:42.638613   17426 addons.go:234] Setting addon yakd=true in "addons-628754"
	I0920 20:48:42.638660   17426 addons.go:69] Setting metrics-server=true in profile "addons-628754"
	I0920 20:48:42.638668   17426 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-628754"
	I0920 20:48:42.638682   17426 addons.go:69] Setting gcp-auth=true in profile "addons-628754"
	I0920 20:48:42.638663   17426 addons.go:69] Setting cloud-spanner=true in profile "addons-628754"
	I0920 20:48:42.638683   17426 addons.go:69] Setting volumesnapshots=true in profile "addons-628754"
	I0920 20:48:42.638694   17426 addons.go:69] Setting ingress=true in profile "addons-628754"
	I0920 20:48:42.638704   17426 addons.go:234] Setting addon cloud-spanner=true in "addons-628754"
	I0920 20:48:42.638706   17426 addons.go:234] Setting addon volumesnapshots=true in "addons-628754"
	I0920 20:48:42.638709   17426 addons.go:234] Setting addon ingress=true in "addons-628754"
	I0920 20:48:42.638710   17426 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:42.638712   17426 addons.go:69] Setting default-storageclass=true in profile "addons-628754"
	I0920 20:48:42.638726   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-628754"
	I0920 20:48:42.638731   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638732   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638739   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638747   17426 addons.go:69] Setting ingress-dns=true in profile "addons-628754"
	I0920 20:48:42.638759   17426 addons.go:234] Setting addon ingress-dns=true in "addons-628754"
	I0920 20:48:42.638783   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638849   17426 addons.go:69] Setting registry=true in profile "addons-628754"
	I0920 20:48:42.638879   17426 addons.go:234] Setting addon registry=true in "addons-628754"
	I0920 20:48:42.638907   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638706   17426 mustload.go:65] Loading cluster: addons-628754
	I0920 20:48:42.639189   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639194   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639204   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.638669   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639211   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639222   17426 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-628754"
	I0920 20:48:42.639215   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639229   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639234   17426 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-628754"
	I0920 20:48:42.639239   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639259   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638657   17426 addons.go:234] Setting addon inspektor-gadget=true in "addons-628754"
	I0920 20:48:42.638623   17426 addons.go:69] Setting storage-provisioner=true in profile "addons-628754"
	I0920 20:48:42.639295   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639303   17426 addons.go:234] Setting addon storage-provisioner=true in "addons-628754"
	I0920 20:48:42.639310   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.639322   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639328   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638618   17426 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-628754"
	I0920 20:48:42.639348   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-628754"
	I0920 20:48:42.639357   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.638741   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 addons.go:234] Setting addon metrics-server=true in "addons-628754"
	I0920 20:48:42.639327   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639559   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639635   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639647   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639654   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639207   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639669   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639689   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639712   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639774   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639778   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639792   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639925   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639957   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.640112   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.640835   17426 out.go:177] * Verifying Kubernetes components...
	I0920 20:48:42.642673   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:42.660941   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0920 20:48:42.667687   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0920 20:48:42.667692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.667839   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678558   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.678612   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.678621   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678756   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.679309   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679336   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679311   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679408   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679771   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.680337   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.680363   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.680576   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.690668   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 20:48:42.691311   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.691872   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.691890   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.692290   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.692477   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.694008   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0920 20:48:42.697482   17426 addons.go:234] Setting addon default-storageclass=true in "addons-628754"
	I0920 20:48:42.697526   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.697886   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.697921   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699272   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.699624   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.699675   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699924   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.699947   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.700602   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.700795   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.701070   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0920 20:48:42.701245   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0920 20:48:42.701882   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.702429   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.702446   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.702516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0920 20:48:42.702666   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.703662   17426 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-628754"
	I0920 20:48:42.703696   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.704048   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.704078   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.704398   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.704413   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.705415   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705489   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.705551   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705944   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706011   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.706556   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706575   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.707457   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.707477   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.708190   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.708812   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.708852   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.709335   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0920 20:48:42.709840   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.710499   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.710518   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.710878   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.711395   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.711432   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.717610   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0920 20:48:42.718243   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.718856   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.718894   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.719298   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.719840   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.719876   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.725177   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0920 20:48:42.725418   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0920 20:48:42.725937   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726018   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726549   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.726565   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.726909   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.727514   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.727549   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.727836   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.727849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.730675   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0920 20:48:42.730906   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0920 20:48:42.731307   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731389   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731827   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.731849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.732193   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.732707   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.732748   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.735536   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736097   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.736137   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.736396   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736991   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.737017   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.744129   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0920 20:48:42.744671   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.747553   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0920 20:48:42.748011   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.749026   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.749052   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.749449   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.750408   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39107
	I0920 20:48:42.750584   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.750999   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.751611   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751627   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751764   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751776   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751844   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0920 20:48:42.752245   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.752314   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.752542   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.752592   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.753503   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0920 20:48:42.753974   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.754124   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0920 20:48:42.754789   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754806   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.754922   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.754954   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754979   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.755347   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.755748   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.755778   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756451   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.756504   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756710   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.756809   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0920 20:48:42.756911   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756956   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756972   17426 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 20:48:42.757147   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.757160   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.757573   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.757601   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.757786   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.757848   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.757957   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.758203   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.758567   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:42.758586   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 20:48:42.758609   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.759930   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.759949   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.760329   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.760522   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.762149   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0920 20:48:42.763177   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.763632   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764071   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.764089   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764362   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.764542   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.764721   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.764829   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.764982   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:48:42.765250   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.765505   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.765517   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.766293   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:48:42.766312   17426 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:48:42.766331   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.767282   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:42.767602   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.767617   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:42.767629   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.767635   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.768023   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.768620   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.768636   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.769033   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.769606   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.769647   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.770046   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770055   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.770081   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.770105   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770202   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.770375   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.770438   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.770446   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 20:48:42.770526   17426 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 20:48:42.770669   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.772010   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.773576   17426 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:48:42.774641   17426 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:48:42.774658   17426 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:48:42.774678   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.775024   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I0920 20:48:42.775556   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.776138   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.776153   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.777044   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.777261   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.777562   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0920 20:48:42.777798   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.778169   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.778272   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.778289   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.779104   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.779270   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.779278   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.779293   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.779465   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.779657   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.779942   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.780180   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.781516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0920 20:48:42.781890   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.781970   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.782665   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.784310   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
	I0920 20:48:42.784396   17426 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:48:42.784455   17426 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:48:42.784798   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785170   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0920 20:48:42.785585   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785870   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.785888   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786003   17426 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:42.786021   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:48:42.786040   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786160   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786170   17426 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:42.786174   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786179   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:48:42.786192   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786754   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786805   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786848   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786871   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.787069   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787456   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.787704   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787764   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.788455   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0920 20:48:42.788995   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.789603   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.789631   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.790117   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.790898   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790970   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791037   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.791058   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.791396   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.791464   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791556   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.791750   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.791781   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791892   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.792728   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.792749   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.792788   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.792821   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.793062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.793246   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.793367   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.794000   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:48:42.794771   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.795365   17426 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:48:42.795406   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 20:48:42.796390   17426 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:48:42.797256   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I0920 20:48:42.797777   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:48:42.797865   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:48:42.797883   17426 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:48:42.797902   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.799447   17426 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:48:42.799589   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.800122   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.801616   17426 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:48:42.801632   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:48:42.801653   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.801713   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:48:42.802118   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.802173   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.802806   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0920 20:48:42.803029   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.803221   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0920 20:48:42.803404   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.803411   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.803584   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0920 20:48:42.803663   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.803723   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.803906   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.803922   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804042   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.804056   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804059   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.804245   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.804521   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.804585   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.804639   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804782   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.804793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804844   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.804970   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:48:42.805144   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.805349   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:42.805330   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.805367   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 20:48:42.805386   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.805528   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.805581   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.805711   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.805727   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.806112   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.806210   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806321   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.806672   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.806695   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806377   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.806874   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.807029   17426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:48:42.807167   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.807229   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.807513   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.808311   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:42.808334   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:48:42.808352   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.809310   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:48:42.811107   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:48:42.811156   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.811412   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.811454   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.811479   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.811499   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.811515   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.812860   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.812904   17426 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:48:42.813048   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0920 20:48:42.813072   17426 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:42.813465   17426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:48:42.813488   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.813075   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.813656   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.814613   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:48:42.814632   17426 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:48:42.814649   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.814848   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.815274   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:48:42.815352   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.815377   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.815679   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.815732   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.815942   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.816341   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.816374   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.816631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.816813   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.816997   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.817143   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.817549   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:48:42.818037   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.818075   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818471   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.818494   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.818744   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:48:42.818758   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:48:42.818767   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.818776   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.818906   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.819052   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.819435   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.819491   17426 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:48:42.819891   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.819917   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.820019   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.820163   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.820412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.820530   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.821660   17426 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:48:42.821707   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822065   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.822084   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822234   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.822379   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.822499   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.822604   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.822845   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:42.822864   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:48:42.822891   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.825409   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826320   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.826340   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826490   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.826650   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.826791   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.826997   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:43.153916   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:43.168073   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:43.168113   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 20:48:43.291676   17426 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:48:43.291703   17426 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:48:43.315362   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:48:43.315391   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:48:43.316712   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:43.334248   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:43.348744   17426 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:48:43.348770   17426 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:48:43.355224   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:43.380424   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:48:43.380456   17426 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:48:43.394590   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:48:43.394621   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:48:43.398170   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:48:43.398196   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:48:43.428507   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:43.436406   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:43.461703   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:43.498907   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:48:43.498930   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:48:43.574065   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:48:43.574091   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:48:43.575337   17426 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.575356   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:48:43.577797   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:48:43.577814   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:48:43.633050   17426 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:48:43.633074   17426 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:48:43.634773   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:48:43.634794   17426 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:48:43.724101   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:48:43.724129   17426 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:48:43.756985   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:48:43.757016   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:48:43.773054   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.788001   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:48:43.788031   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:48:43.885553   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:48:43.885575   17426 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:48:43.888832   17426 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:48:43.888852   17426 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:48:43.950930   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:43.950977   17426 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:48:43.992296   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:48:43.992318   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:48:44.002913   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:48:44.002938   17426 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:48:44.086858   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:48:44.086885   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:48:44.087873   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.087888   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:48:44.166057   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:48:44.166083   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:48:44.250028   17426 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.250057   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:48:44.341452   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:44.487857   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.491856   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:48:44.491882   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:48:44.577907   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:48:44.577939   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:48:44.667346   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.955208   17426 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:48:44.955243   17426 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:48:44.999353   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:48:44.999376   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:48:45.209150   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:48:45.209172   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:48:45.262741   17426 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.262773   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:48:45.491632   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:45.491665   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:48:45.602976   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.772919   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:46.676907   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.522947186s)
	I0920 20:48:46.676970   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.676965   17426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.50885438s)
	I0920 20:48:46.676983   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.676991   17426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.508845802s)
	I0920 20:48:46.677011   17426 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 20:48:46.677250   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677312   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677351   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.677279   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:46.677361   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.677562   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677575   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677980   17426 node_ready.go:35] waiting up to 6m0s for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718417   17426 node_ready.go:49] node "addons-628754" has status "Ready":"True"
	I0920 20:48:46.718446   17426 node_ready.go:38] duration metric: took 40.437292ms for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718457   17426 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:48:46.792590   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:47.364664   17426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-628754" context rescaled to 1 replicas
	I0920 20:48:48.649858   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.333111598s)
	I0920 20:48:48.649919   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649931   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.649939   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.315657166s)
	I0920 20:48:48.649974   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649986   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650035   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.294786194s)
	I0920 20:48:48.650101   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650114   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650278   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650325   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650395   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650409   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650420   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650365   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650483   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650494   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650502   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650281   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650425   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650655   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650663   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650670   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650458   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650706   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650736   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650743   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650777   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652127   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652143   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.652308   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652326   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652333   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.858610   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:48.896047   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.896071   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.896417   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.896440   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:49.836065   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:48:49.836110   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:49.838684   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839092   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:49.839121   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839312   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:49.839498   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:49.839673   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:49.839822   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.396937   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:48:50.507257   17426 addons.go:234] Setting addon gcp-auth=true in "addons-628754"
	I0920 20:48:50.507314   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:50.507599   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.507638   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.522912   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0920 20:48:50.523406   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.523934   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.523954   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.524273   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.524773   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.524796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.540243   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0920 20:48:50.540743   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.541204   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.541230   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.541545   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.541736   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:50.543277   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:50.543496   17426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:48:50.543517   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:50.546450   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.546948   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:50.546998   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.547180   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:50.547367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:50.547520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:50.547677   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.940912   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.512363058s)
	I0920 20:48:50.940953   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.504514111s)
	I0920 20:48:50.940967   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940980   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.940984   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940994   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941024   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.479289353s)
	I0920 20:48:50.941058   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941073   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941075   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.167991227s)
	I0920 20:48:50.941094   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941110   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941144   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.599655439s)
	I0920 20:48:50.941166   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941177   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941213   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.453326238s)
	I0920 20:48:50.941232   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941249   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941456   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941460   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941475   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941490   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941495   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941499   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941502   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941512   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941515   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941523   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941541   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941543   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941555   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941518   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941567   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941568   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941576   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941584   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941590   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941569   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941603   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941610   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941613   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941621   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941887   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941903   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941926   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941935   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941944   17426 addons.go:475] Verifying addon registry=true in "addons-628754"
	I0920 20:48:50.942201   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.942209   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944058   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944089   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944096   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944105   17426 addons.go:475] Verifying addon metrics-server=true in "addons-628754"
	I0920 20:48:50.944376   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944388   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944397   17426 addons.go:475] Verifying addon ingress=true in "addons-628754"
	I0920 20:48:50.944525   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944535   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944664   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944693   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944699   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.945140   17426 out.go:177] * Verifying registry addon...
	I0920 20:48:50.945952   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:50.946263   17426 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-628754 service yakd-dashboard -n yakd-dashboard
	
	I0920 20:48:50.947404   17426 out.go:177] * Verifying ingress addon...
	I0920 20:48:50.948365   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:48:50.949906   17426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 20:48:51.024842   17426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:48:51.024862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.067642   17426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 20:48:51.067672   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.189554   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.189583   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.189883   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.189903   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.189907   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.485820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.486047   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.508824   17426 pod_ready.go:93] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:51.508855   17426 pod_ready.go:82] duration metric: took 4.716232797s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.508867   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.565560   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.898160297s)
	W0920 20:48:51.565610   17426 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565618   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.96259978s)
	I0920 20:48:51.565638   17426 retry.go:31] will retry after 289.771206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565665   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.565681   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.565928   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.565982   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.565995   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566029   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.566041   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.566248   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.566263   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566271   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.856017   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:51.967451   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.967668   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.482714   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.483315   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.577722   17426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.034201768s)
	I0920 20:48:52.577966   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.80498182s)
	I0920 20:48:52.578008   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578045   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578273   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578285   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578293   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578299   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578504   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578523   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578537   17426 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:52.579487   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:52.580427   17426 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:48:52.581909   17426 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:48:52.582957   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:48:52.583079   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:48:52.583105   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:48:52.594165   17426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:48:52.594198   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:52.744237   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:48:52.744290   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:48:52.782030   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.782055   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:48:52.812476   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.953254   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.955366   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.088530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.452082   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.454025   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.518057   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.592525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.952666   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.954796   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.088578   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.137461   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.281390873s)
	I0920 20:48:54.137517   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137536   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137804   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.137827   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.137837   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137845   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137851   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.138105   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.138122   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.467785   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.488772   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.542327   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.729811085s)
	I0920 20:48:54.542374   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542390   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542681   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542702   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.542701   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.542722   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542729   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542936   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542997   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.543017   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.544656   17426 addons.go:475] Verifying addon gcp-auth=true in "addons-628754"
	I0920 20:48:54.546479   17426 out.go:177] * Verifying gcp-auth addon...
	I0920 20:48:54.548531   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:48:54.573659   17426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:48:54.573678   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:54.617981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.952495   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.960055   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.061661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.091945   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.458616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.459309   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:55.516457   17426 pod_ready.go:98] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516485   17426 pod_ready.go:82] duration metric: took 4.007610514s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	E0920 20:48:55.516499   17426 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516516   17426 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520826   17426 pod_ready.go:93] pod "etcd-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.520844   17426 pod_ready.go:82] duration metric: took 4.317233ms for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520857   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526814   17426 pod_ready.go:93] pod "kube-apiserver-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.526835   17426 pod_ready.go:82] duration metric: took 5.969545ms for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526846   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534816   17426 pod_ready.go:93] pod "kube-controller-manager-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.534841   17426 pod_ready.go:82] duration metric: took 7.987084ms for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534855   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541017   17426 pod_ready.go:93] pod "kube-proxy-ck9m7" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.541043   17426 pod_ready.go:82] duration metric: took 6.180196ms for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541055   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.551619   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.593184   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.913365   17426 pod_ready.go:93] pod "kube-scheduler-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.913386   17426 pod_ready.go:82] duration metric: took 372.323338ms for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.913396   17426 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.051699   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.452870   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.454722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:56.551855   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.587689   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.953293   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.955457   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.051952   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.088662   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.452257   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.454267   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.552093   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.588708   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.922382   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:57.954489   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.955101   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.052836   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.088049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.453756   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.454264   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.552172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.588631   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.956248   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.956561   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.054141   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.091590   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.452442   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.457148   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.552573   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.587624   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.958079   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.959212   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.058727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.170305   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.426857   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:00.452423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.454540   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.552677   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.588021   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.952150   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.954190   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.052262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.087142   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.452176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.455052   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.552454   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.587458   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.954352   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.954591   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.052894   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.088280   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.453416   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.453710   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.552308   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.587514   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.919214   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:02.952365   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.954381   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.052491   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.087251   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.453233   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.455205   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.552645   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.588778   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.954362   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.954626   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.052848   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.088290   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.452525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.455051   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.553267   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.587782   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.919682   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:04.951969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.956425   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.052517   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.087781   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.452284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.454925   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.553478   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.588581   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.953511   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.954297   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.051947   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.088183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.453270   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:06.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.553191   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.588354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.920227   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:06.954430   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.954572   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.052484   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.452418   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.454832   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.552476   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.587613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.952831   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.954609   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.052698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.088426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.452555   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.453857   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.587877   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.921045   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:08.953289   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.955044   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.058613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.088441   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.454326   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.454844   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.552759   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.920620   17426 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:09.920641   17426 pod_ready.go:82] duration metric: took 14.007238973s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:09.920649   17426 pod_ready.go:39] duration metric: took 23.202177695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:09.920663   17426 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:09.920712   17426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:09.947981   17426 api_server.go:72] duration metric: took 27.309549406s to wait for apiserver process to appear ...
	I0920 20:49:09.948011   17426 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:09.948029   17426 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I0920 20:49:09.956504   17426 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I0920 20:49:09.958166   17426 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:09.958191   17426 api_server.go:131] duration metric: took 10.173299ms to wait for apiserver health ...
	I0920 20:49:09.958199   17426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:09.958813   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.961812   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.968812   17426 system_pods.go:59] 17 kube-system pods found
	I0920 20:49:09.968841   17426 system_pods.go:61] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.968849   17426 system_pods.go:61] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.968855   17426 system_pods.go:61] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.968863   17426 system_pods.go:61] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.968867   17426 system_pods.go:61] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.968871   17426 system_pods.go:61] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.968874   17426 system_pods.go:61] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.968877   17426 system_pods.go:61] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.968880   17426 system_pods.go:61] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.968883   17426 system_pods.go:61] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.968887   17426 system_pods.go:61] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.968891   17426 system_pods.go:61] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.968896   17426 system_pods.go:61] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.968900   17426 system_pods.go:61] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.968907   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968912   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968915   17426 system_pods.go:61] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.968921   17426 system_pods.go:74] duration metric: took 10.716452ms to wait for pod list to return data ...
	I0920 20:49:09.968929   17426 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:09.971550   17426 default_sa.go:45] found service account: "default"
	I0920 20:49:09.971566   17426 default_sa.go:55] duration metric: took 2.63218ms for default service account to be created ...
	I0920 20:49:09.971573   17426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:09.979493   17426 system_pods.go:86] 17 kube-system pods found
	I0920 20:49:09.979518   17426 system_pods.go:89] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.979525   17426 system_pods.go:89] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.979532   17426 system_pods.go:89] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.979538   17426 system_pods.go:89] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.979546   17426 system_pods.go:89] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.979551   17426 system_pods.go:89] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.979555   17426 system_pods.go:89] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.979559   17426 system_pods.go:89] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.979562   17426 system_pods.go:89] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.979567   17426 system_pods.go:89] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.979572   17426 system_pods.go:89] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.979575   17426 system_pods.go:89] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.979581   17426 system_pods.go:89] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.979585   17426 system_pods.go:89] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.979595   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979600   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979604   17426 system_pods.go:89] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.979610   17426 system_pods.go:126] duration metric: took 8.032198ms to wait for k8s-apps to be running ...
	I0920 20:49:09.979617   17426 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:09.979655   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:09.995230   17426 system_svc.go:56] duration metric: took 15.606321ms WaitForService to wait for kubelet
	I0920 20:49:09.995254   17426 kubeadm.go:582] duration metric: took 27.356825963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:49:09.995272   17426 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:09.999350   17426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 20:49:09.999371   17426 node_conditions.go:123] node cpu capacity is 2
	I0920 20:49:09.999383   17426 node_conditions.go:105] duration metric: took 4.107149ms to run NodePressure ...
	I0920 20:49:09.999393   17426 start.go:241] waiting for startup goroutines ...
	I0920 20:49:10.053460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.087433   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.454100   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.553327   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.587292   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.953419   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.953751   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.052530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.087591   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.452819   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.454012   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.553210   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.588262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.954176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.956729   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.052811   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.088582   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.452880   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.456247   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.551887   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.588295   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.954541   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.954986   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.053227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.088399   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.453995   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.454604   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.553146   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.588109   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.952535   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.954871   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.053058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.088969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.453183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.456277   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.552749   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.587809   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.952580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.954915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.052901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.088153   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.452376   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.454450   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.552178   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.590244   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.952288   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.954887   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.059820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.089668   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.453534   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.455331   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.553285   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.587701   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.952987   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.955195   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.052783   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.088222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.453994   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.455805   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.552334   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.588741   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.951963   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.954638   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.052247   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.088172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.453070   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.454583   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.552893   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.587730   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.176219   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.176692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.177743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.178016   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.452551   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.455179   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.552929   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.588351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.952550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.954625   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.052996   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.088357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.452557   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.455753   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.551985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.587862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.954138   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.955659   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.053752   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.088273   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.452533   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.454412   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.551961   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.213019   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.214276   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.214460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.214725   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455160   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455472   17426 kapi.go:107] duration metric: took 31.507106479s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:22.554860   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.587605   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.959147   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.056319   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.088920   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.454497   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.552505   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.587969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.954323   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.051816   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.088130   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.456274   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.555427   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.588693   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.955063   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.052728   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.087985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.456210   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.552574   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.587488   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.955116   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.054687   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.087638   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.626115   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.626483   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.628344   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.954240   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.052307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.087762   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.454250   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.552081   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.589592   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.953915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.052492   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.088707   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.455291   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.551661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.599028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.956684   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.052143   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.089901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.454734   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.555646   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.588017   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.954631   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.052315   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.089604   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.453995   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.552431   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.587569   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.955286   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.052545   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.087284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.455202   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.551700   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.587298   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.954692   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.052682   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.093227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.457841   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.557548   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.658776   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.954760   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.052426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.087335   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.455135   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.552450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.599114   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.954722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.051989   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.087593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.453655   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.552523   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.596357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.955321   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.052589   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.087503   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.455616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.552302   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.592075   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.956755   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.056028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.157593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.455543   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.551889   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.595187   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.955775   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.052828   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.088401   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.454574   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.588058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.955848   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.053475   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.087807   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.453781   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.551921   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.588049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.956439   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.287727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.288266   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.455413   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.556117   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.587698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.954228   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.052587   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.088235   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.454685   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.552222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.587307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.965032   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.069064   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.169661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.456252   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.551748   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.587763   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.953862   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.054479   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.090521   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.454259   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.553452   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.588439   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.003849   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.052220   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.088192   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.454954   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.552351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.588323   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.955159   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.053036   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.090032   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.454513   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.552038   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.588265   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.955285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.054003   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.156329   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.454696   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.553050   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.588047   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.955073   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.061080   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.159743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.454017   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.552354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.587300   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.954698   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.054845   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.088317   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.455285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.552981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.598550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.955384   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.054653   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.088277   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.552663   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.587506   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.954620   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.051691   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.154600   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.454723   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.553679   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.587997   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.954402   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.058736   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.090947   17426 kapi.go:107] duration metric: took 57.507986835s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:49:50.455270   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.552676   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.954128   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.051625   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.455727   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.552466   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.955330   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.052011   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.454834   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.552770   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.954599   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.055449   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.455037   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.552423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.955090   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.053025   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.552755   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.956136   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.053157   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.455144   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.552459   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.954952   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.054107   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598482   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598807   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.955194   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.054341   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.456555   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.552580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.954657   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.052499   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.454614   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.552294   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.954070   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.054450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.594930   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.595375   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.953489   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.052196   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.454898   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.552561   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.954914   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.055034   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.454793   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.552037   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.955703   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.054975   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.454984   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.552692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.961750   17426 kapi.go:107] duration metric: took 1m12.01184163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 20:50:03.052368   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:03.552411   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.052493   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.552884   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.053084   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.555738   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:06.052349   17426 kapi.go:107] duration metric: took 1m11.503815049s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:50:06.054082   17426 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-628754 cluster.
	I0920 20:50:06.055505   17426 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:50:06.056936   17426 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:50:06.058346   17426 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 20:50:06.059716   17426 addons.go:510] duration metric: took 1m23.421239851s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server nvidia-device-plugin yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 20:50:06.059768   17426 start.go:246] waiting for cluster config update ...
	I0920 20:50:06.059789   17426 start.go:255] writing updated cluster config ...
	I0920 20:50:06.060115   17426 ssh_runner.go:195] Run: rm -f paused
	I0920 20:50:06.111505   17426 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:50:06.113737   17426 out.go:177] * Done! kubectl is now configured to use "addons-628754" cluster and "default" namespace by default
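	
	The repeated `kapi.go:96` lines above show minikube polling each addon's pods by label selector (kubernetes.io/minikube-addons=registry, app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=csi-hostpath-driver, kubernetes.io/minikube-addons=gcp-auth) until they leave Pending, with the matching `kapi.go:107` lines recording how long each wait took. The sketch below is a minimal client-go rendering of that polling pattern, not minikube's actual implementation; the namespace, selector, timeout, and kubeconfig path are assumptions chosen for illustration.
	
	// waitforpods.go: poll pods matching a label selector until all are Running.
	// Illustrative sketch only; namespace, selector, and timeout are assumptions,
	// not values taken from the test run above.
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForPods lists pods matching selector in ns and returns once every
	// matching pod reports phase Running, or errors after timeout.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %q in %q", selector, ns)
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the log above
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(context.Background(), cs, "kube-system",
			"kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pods are Running")
	}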
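	
	The gcp-auth notes printed by minikube above say that credentials are mounted into every pod unless the pod carries a label with the `gcp-auth-skip-secret` key. The snippet below shows, under stated assumptions, where such a label would sit when building a Pod object with client-go types; the label value, pod name, and image are hypothetical placeholders, only the label key comes from the log.
	
	// gcpauthexample: attach the gcp-auth-skip-secret label mentioned in the log above.
	// The value "true", the pod name, and the image are assumptions for illustration.
	package gcpauthexample
	
	import (
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)
	
	// skipGCPAuthPod returns a Pod labeled so the gcp-auth addon leaves it alone,
	// per the minikube message above about the gcp-auth-skip-secret label key.
	func skipGCPAuthPod() *corev1.Pod {
		return &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "busybox", // placeholder image
				}},
			},
		}
	}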
	
	
	==> CRI-O <==
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.068712346Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866104068683125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=393e1dbd-fdd0-445d-9f59-0adf5d5266f6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.069259052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=83f8bf38-61b9-4a14-aa3b-53234700030a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.069332530Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=83f8bf38-61b9-4a14-aa3b-53234700030a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.069567070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=83f8bf38-61b9-4a14-aa3b-53234700030a name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.112909030Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5453ba61-823f-4d7e-946b-14685ec0d8a7 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.113007224Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5453ba61-823f-4d7e-946b-14685ec0d8a7 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.116215951Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c7afd2a8-97c9-4144-9a3a-c7d8fa56e3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.117371339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866104117339607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c7afd2a8-97c9-4144-9a3a-c7d8fa56e3f7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.118261683Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01987981-f32f-4635-8233-3cf1ebc7faaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.118331685Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01987981-f32f-4635-8233-3cf1ebc7faaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.118569707Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01987981-f32f-4635-8233-3cf1ebc7faaf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.155427720Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2c25c589-c04c-45aa-9cf5-a8925000c6df name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.155528301Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2c25c589-c04c-45aa-9cf5-a8925000c6df name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.156875359Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00c916b4-e764-4a58-916e-d3d7863ccb8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.158161929Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866104158134856,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00c916b4-e764-4a58-916e-d3d7863ccb8f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.158905740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91eabe0b-3e76-4d0c-92c7-7e5cdd890348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.158983322Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91eabe0b-3e76-4d0c-92c7-7e5cdd890348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.159261842Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91eabe0b-3e76-4d0c-92c7-7e5cdd890348 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.206019435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b1f6977-af68-4e03-bb11-022245bd4d58 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.206116077Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b1f6977-af68-4e03-bb11-022245bd4d58 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.207069232Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3f248a37-9bd3-409b-8e91-03fafdf31896 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.208251118Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866104208222142,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3f248a37-9bd3-409b-8e91-03fafdf31896 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.208772118Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=35ae922c-f817-4bba-974a-8ee34ac82857 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.208961378Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=35ae922c-f817-4bba-974a-8ee34ac82857 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:01:44 addons-628754 crio[661]: time="2024-09-20 21:01:44.209325819Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=35ae922c-f817-4bba-974a-8ee34ac82857 name=/runtime.v1.RuntimeService/ListContainers
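	The repeated /runtime.v1.RuntimeService/Version, ImageFsInfo and ListContainers entries above are routine CRI polling against cri-o. For reference only (this is not part of the test run): a minimal Go sketch that issues the same ListContainers RPC against the cri-o endpoint shown in the node's cri-socket annotation (unix:///var/run/crio/crio.sock), assuming the k8s.io/cri-api v1 client and grpc-go are available.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the cri-o runtime endpoint referenced in the node annotations.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the ListContainers entries in the crio debug log above.
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.GetContainers() {
		fmt.Printf("%.13s  %-25s  %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
	}
}

	Printing the first 13 characters of each container ID mirrors the truncated IDs in the "container status" table that follows.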
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7af85242af0bc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   9 seconds ago       Running             hello-world-app           0                   f450cbc9b5731       hello-world-app-55bf9c44b4-8kfw7
	bb0e1c83764c6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         2 minutes ago       Running             nginx                     0                   8aca549d11d28       nginx
	0f6e90ddc80f9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            11 minutes ago      Running             gcp-auth                  0                   4e0a25a9933fe       gcp-auth-89d5ffd79-cbzrf
	7bc78aa165fe9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   12 minutes ago      Running             metrics-server            0                   1e9f678a16f9c       metrics-server-84c5f94fbc-jcnct
	d26a1572da03a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        12 minutes ago      Running             storage-provisioner       0                   18ef2311ef640       storage-provisioner
	eb35b11d6adb6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        12 minutes ago      Running             coredns                   0                   71537944126f0       coredns-7c65d6cfc9-2s7hq
	628b1145299a3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        13 minutes ago      Running             kube-proxy                0                   eda2575050700       kube-proxy-ck9m7
	8c22fecec8ece       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        13 minutes ago      Running             etcd                      0                   0c7ce0a0dbb52       etcd-addons-628754
	9f47006040cb1       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        13 minutes ago      Running             kube-scheduler            0                   a48bfd7c50b60       kube-scheduler-addons-628754
	6308109d7cef9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        13 minutes ago      Running             kube-controller-manager   0                   8a9d6431fae92       kube-controller-manager-addons-628754
	7d18650c60246       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        13 minutes ago      Running             kube-apiserver            0                   7760f2e14947d       kube-apiserver-addons-628754
	
	
	==> coredns [eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167] <==
	[INFO] 127.0.0.1:39672 - 18092 "HINFO IN 1734730656473124985.126132105188895419. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.041138577s
	[INFO] 10.244.0.7:39855 - 37406 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000458004s
	[INFO] 10.244.0.7:39855 - 49690 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000827871s
	[INFO] 10.244.0.7:45470 - 13293 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192818s
	[INFO] 10.244.0.7:45470 - 8936 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000320178s
	[INFO] 10.244.0.7:43873 - 16971 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110387s
	[INFO] 10.244.0.7:43873 - 16457 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096404s
	[INFO] 10.244.0.7:44014 - 47350 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149624s
	[INFO] 10.244.0.7:44014 - 3832 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332831s
	[INFO] 10.244.0.7:47949 - 2333 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070958s
	[INFO] 10.244.0.7:47949 - 57368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030483s
	[INFO] 10.244.0.7:33217 - 42646 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045557s
	[INFO] 10.244.0.7:33217 - 51860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00021064s
	[INFO] 10.244.0.7:33663 - 16494 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081574s
	[INFO] 10.244.0.7:33663 - 7952 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109875s
	[INFO] 10.244.0.7:41599 - 53504 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009227s
	[INFO] 10.244.0.7:41599 - 40450 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000256907s
	[INFO] 10.244.0.21:38291 - 34561 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696839s
	[INFO] 10.244.0.21:37047 - 19984 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286736s
	[INFO] 10.244.0.21:48670 - 59512 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103607s
	[INFO] 10.244.0.21:42040 - 13282 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079655s
	[INFO] 10.244.0.21:49121 - 13348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082438s
	[INFO] 10.244.0.21:43215 - 5085 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080693s
	[INFO] 10.244.0.21:57332 - 41951 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000645949s
	[INFO] 10.244.0.21:51212 - 53235 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001099281s
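	The NXDOMAIN/NOERROR pairs above are the pod resolver walking its search list before trying the name verbatim: the same name appears with kube-system.svc.cluster.local, svc.cluster.local and cluster.local appended, and only the unsuffixed query succeeds. A minimal sketch of that candidate ordering, assuming the default in-cluster ndots:5 behaviour and the search suffixes visible in these queries (the sketch is illustrative, not taken from the test code):

package main

import "fmt"

// candidates mirrors resolv.conf search-list expansion: with ndots:5, a name
// containing fewer than five dots is tried with each search suffix appended
// first, and only then as-is.
func candidates(name string, searches []string, ndots int) []string {
	dots := 0
	for _, r := range name {
		if r == '.' {
			dots++
		}
	}
	var out []string
	if dots < ndots {
		for _, s := range searches {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Suffixes seen in the coredns queries above.
	searches := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local"}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", searches, 5) {
		fmt.Println(q)
	}
}

	This prints the four query names in the same order they appear in the log: three NXDOMAIN expansions followed by the NOERROR lookup of the service name itself.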
	
	
	==> describe nodes <==
	Name:               addons-628754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-628754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-628754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-628754
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-628754
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:01:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    addons-628754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7c9184a6c7a4f059f8bfecc27a38bef
	  System UUID:                c7c9184a-6c7a-4f05-9f8b-fecc27a38bef
	  Boot ID:                    cf51e733-8187-478e-b6b3-2f37c0eadb57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-8kfw7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-89d5ffd79-cbzrf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-2s7hq                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     13m
	  kube-system                 etcd-addons-628754                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         13m
	  kube-system                 kube-apiserver-addons-628754             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-628754    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-ck9m7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-628754             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-jcnct          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeReady                13m                kubelet          Node addons-628754 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node addons-628754 event: Registered Node addons-628754 in Controller
	
	
	==> dmesg <==
	[  +5.067774] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.008250] kauditd_printk_skb: 111 callbacks suppressed
	[Sep20 20:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.153986] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.168012] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.152868] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.310325] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.091709] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.646153] kauditd_printk_skb: 6 callbacks suppressed
	[Sep20 20:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.211672] kauditd_printk_skb: 54 callbacks suppressed
	[  +9.022690] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 20:51] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 20:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:55] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:58] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.397806] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.278210] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.586919] kauditd_printk_skb: 56 callbacks suppressed
	[  +8.269165] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.002687] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 20:59] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.497306] kauditd_printk_skb: 54 callbacks suppressed
	[Sep20 21:01] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.371167] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7] <==
	{"level":"info","ts":"2024-09-20T20:49:59.376221Z","caller":"traceutil/trace.go:171","msg":"trace[1072542918] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1134; }","duration":"319.965762ms","start":"2024-09-20T20:49:59.056242Z","end":"2024-09-20T20:49:59.376208Z","steps":["trace[1072542918] 'agreement among raft nodes before linearized reading'  (duration: 318.885736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:59.376383Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:49:59.056200Z","time spent":"320.170995ms","remote":"127.0.0.1:36416","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1136,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"info","ts":"2024-09-20T20:49:59.565189Z","caller":"traceutil/trace.go:171","msg":"trace[419852999] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"137.143107ms","start":"2024-09-20T20:49:59.428033Z","end":"2024-09-20T20:49:59.565176Z","steps":["trace[419852999] 'read index received'  (duration: 137.009045ms)","trace[419852999] 'applied index is now lower than readState.Index'  (duration: 133.69µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:49:59.565464Z","caller":"traceutil/trace.go:171","msg":"trace[581102703] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"184.808174ms","start":"2024-09-20T20:49:59.380645Z","end":"2024-09-20T20:49:59.565453Z","steps":["trace[581102703] 'process raft request'  (duration: 184.43691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:59.565524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.475086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:49:59.566221Z","caller":"traceutil/trace.go:171","msg":"trace[268053702] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"138.182795ms","start":"2024-09-20T20:49:59.428029Z","end":"2024-09-20T20:49:59.566212Z","steps":["trace[268053702] 'agreement among raft nodes before linearized reading'  (duration: 137.45984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:17.859241Z","caller":"traceutil/trace.go:171","msg":"trace[158193461] linearizableReadLoop","detail":"{readStateIndex:2108; appliedIndex:2107; }","duration":"270.251065ms","start":"2024-09-20T20:58:17.588968Z","end":"2024-09-20T20:58:17.859219Z","steps":["trace[158193461] 'read index received'  (duration: 270.029869ms)","trace[158193461] 'applied index is now lower than readState.Index'  (duration: 220.692µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:58:17.859567Z","caller":"traceutil/trace.go:171","msg":"trace[251460462] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1971; }","duration":"294.826658ms","start":"2024-09-20T20:58:17.564730Z","end":"2024-09-20T20:58:17.859556Z","steps":["trace[251460462] 'process raft request'  (duration: 294.306874ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.341119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-20T20:58:17.860684Z","caller":"traceutil/trace.go:171","msg":"trace[2086911049] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1971; }","duration":"217.238418ms","start":"2024-09-20T20:58:17.643435Z","end":"2024-09-20T20:58:17.860673Z","steps":["trace[2086911049] 'agreement among raft nodes before linearized reading'  (duration: 216.223347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.032072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-09-20T20:58:17.861132Z","caller":"traceutil/trace.go:171","msg":"trace[259109003] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1971; }","duration":"272.17394ms","start":"2024-09-20T20:58:17.588947Z","end":"2024-09-20T20:58:17.861121Z","steps":["trace[259109003] 'agreement among raft nodes before linearized reading'  (duration: 270.978028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.24562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.861502Z","caller":"traceutil/trace.go:171","msg":"trace[1923698082] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1971; }","duration":"150.517679ms","start":"2024-09-20T20:58:17.710976Z","end":"2024-09-20T20:58:17.861494Z","steps":["trace[1923698082] 'agreement among raft nodes before linearized reading'  (duration: 149.237948ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.092261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.864904Z","caller":"traceutil/trace.go:171","msg":"trace[1341777283] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1971; }","duration":"166.743439ms","start":"2024-09-20T20:58:17.698148Z","end":"2024-09-20T20:58:17.864891Z","steps":["trace[1341777283] 'agreement among raft nodes before linearized reading'  (duration: 162.088037ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:33.349055Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1548}
	{"level":"info","ts":"2024-09-20T20:58:33.390006Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1548,"took":"40.358531ms","hash":3091395663,"current-db-size-bytes":7442432,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":4321280,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-20T20:58:33.390117Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091395663,"revision":1548,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T20:58:40.291216Z","caller":"traceutil/trace.go:171","msg":"trace[1233032017] transaction","detail":"{read_only:false; response_revision:2184; number_of_response:1; }","duration":"102.579104ms","start":"2024-09-20T20:58:40.188615Z","end":"2024-09-20T20:58:40.291194Z","steps":["trace[1233032017] 'process raft request'  (duration: 102.479076ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.625494Z","caller":"traceutil/trace.go:171","msg":"trace[1492248117] transaction","detail":"{read_only:false; response_revision:2186; number_of_response:1; }","duration":"359.734741ms","start":"2024-09-20T20:58:41.265745Z","end":"2024-09-20T20:58:41.625480Z","steps":["trace[1492248117] 'process raft request'  (duration: 359.639083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.625688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.265729Z","time spent":"359.847904ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2163 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T20:58:41.635221Z","caller":"traceutil/trace.go:171","msg":"trace[2009681851] transaction","detail":"{read_only:false; response_revision:2188; number_of_response:1; }","duration":"287.660276ms","start":"2024-09-20T20:58:41.347543Z","end":"2024-09-20T20:58:41.635203Z","steps":["trace[2009681851] 'process raft request'  (duration: 287.627001ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.635333Z","caller":"traceutil/trace.go:171","msg":"trace[230871389] transaction","detail":"{read_only:false; response_revision:2187; number_of_response:1; }","duration":"337.317765ms","start":"2024-09-20T20:58:41.298010Z","end":"2024-09-20T20:58:41.635327Z","steps":["trace[230871389] 'process raft request'  (duration: 337.000319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.635417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.297992Z","time spent":"337.357119ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-628754\" mod_revision:2121 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-628754\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-628754\" > >"}
	
	
	==> gcp-auth [0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce] <==
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:50:09 Ready to marshal response ...
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:22 Ready to marshal response ...
	2024/09/20 20:58:22 Ready to write response ...
	2024/09/20 20:58:26 Ready to marshal response ...
	2024/09/20 20:58:26 Ready to write response ...
	2024/09/20 20:58:36 Ready to marshal response ...
	2024/09/20 20:58:36 Ready to write response ...
	2024/09/20 20:59:08 Ready to marshal response ...
	2024/09/20 20:59:08 Ready to write response ...
	2024/09/20 20:59:14 Ready to marshal response ...
	2024/09/20 20:59:14 Ready to write response ...
	2024/09/20 21:01:33 Ready to marshal response ...
	2024/09/20 21:01:33 Ready to write response ...
	
	
	==> kernel <==
	 21:01:44 up 13 min,  0 users,  load average: 0.18, 0.35, 0.30
	Linux addons-628754 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee] <==
	 > logger="UnhandledError"
	E0920 20:50:29.073774       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	E0920 20:50:29.080229       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	I0920 20:50:29.156909       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 20:58:12.607107       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.156.22"}
	E0920 20:58:42.815240       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 20:58:47.912557       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 20:59:08.742368       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 20:59:09.785503       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 20:59:14.271182       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 20:59:14.472878       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.230.48"}
	I0920 20:59:24.833550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.833614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.867777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.868426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.889977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.890007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.966388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.966424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.989650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.989712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 20:59:25.868505       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 20:59:25.990598       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 20:59:26.152378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 21:01:33.882210       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.151.149"}
	
	
	==> kube-controller-manager [6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32] <==
	E0920 21:00:41.206177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:43.873778       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:43.873968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:46.758384       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:46.758492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:00:50.556897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:00:50.556995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:27.045043       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:27.045218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:27.123121       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:27.123224       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:01:31.194141       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:31.194208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:33.725736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="59.531379ms"
	I0920 21:01:33.740660       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="14.507508ms"
	I0920 21:01:33.740874       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="121.677µs"
	I0920 21:01:33.751085       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="89.13µs"
	I0920 21:01:35.500475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.208818ms"
	I0920 21:01:35.500561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.034µs"
	I0920 21:01:36.185177       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0920 21:01:36.192342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="6.81µs"
	I0920 21:01:36.197235       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	W0920 21:01:38.071028       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:01:38.071155       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:01:43.982450       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-628754"
	
	
	==> kube-proxy [628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 20:48:44.327957       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 20:48:44.350577       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.204"]
	E0920 20:48:44.350669       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:48:44.445024       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 20:48:44.445092       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 20:48:44.445120       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:48:44.451041       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:48:44.451371       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:48:44.451402       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:48:44.453177       1 config.go:199] "Starting service config controller"
	I0920 20:48:44.453217       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:48:44.453247       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:48:44.453252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:48:44.454211       1 config.go:328] "Starting node config controller"
	I0920 20:48:44.454239       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:48:44.554025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 20:48:44.554067       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:48:44.554904       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479] <==
	W0920 20:48:35.797909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:48:35.797962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.808611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:48:35.808694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.930646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:35.930696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.931077       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:48:35.931116       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 20:48:35.942346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:35.942395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.021065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.021114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.033505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:48:36.033567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.050598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.050648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.094192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:48:36.094242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.113042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.113500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.211462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:36.211605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.341367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:48:36.341480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 20:48:37.917584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 21:01:35 addons-628754 kubelet[1204]: I0920 21:01:35.531281    1204 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-8kfw7" podStartSLOduration=1.729167474 podStartE2EDuration="2.531249108s" podCreationTimestamp="2024-09-20 21:01:33 +0000 UTC" firstStartedPulling="2024-09-20 21:01:34.32808864 +0000 UTC m=+776.790501353" lastFinishedPulling="2024-09-20 21:01:35.130170273 +0000 UTC m=+777.592582987" observedRunningTime="2024-09-20 21:01:35.49317747 +0000 UTC m=+777.955590202" watchObservedRunningTime="2024-09-20 21:01:35.531249108 +0000 UTC m=+777.993661842"
	Sep 20 21:01:35 addons-628754 kubelet[1204]: I0920 21:01:35.719395    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e534a7bd-e2bb-49ab-a14f-28256e31640f" path="/var/lib/kubelet/pods/e534a7bd-e2bb-49ab-a14f-28256e31640f/volumes"
	Sep 20 21:01:37 addons-628754 kubelet[1204]: I0920 21:01:37.720253    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77866133-b28c-4906-88f8-f0ccc0eedfc5" path="/var/lib/kubelet/pods/77866133-b28c-4906-88f8-f0ccc0eedfc5/volumes"
	Sep 20 21:01:37 addons-628754 kubelet[1204]: I0920 21:01:37.720656    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="865d12b2-b990-48b4-aef4-53657d94a027" path="/var/lib/kubelet/pods/865d12b2-b990-48b4-aef4-53657d94a027/volumes"
	Sep 20 21:01:37 addons-628754 kubelet[1204]: E0920 21:01:37.735095    1204 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:01:37 addons-628754 kubelet[1204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:01:37 addons-628754 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:01:37 addons-628754 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:01:37 addons-628754 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:01:38 addons-628754 kubelet[1204]: E0920 21:01:38.173980    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866098173189112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:01:38 addons-628754 kubelet[1204]: E0920 21:01:38.174042    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866098173189112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.465501    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e6262e3-eabd-42e7-8d63-a38f9521aa66-webhook-cert\") pod \"5e6262e3-eabd-42e7-8d63-a38f9521aa66\" (UID: \"5e6262e3-eabd-42e7-8d63-a38f9521aa66\") "
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.465568    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz2zq\" (UniqueName: \"kubernetes.io/projected/5e6262e3-eabd-42e7-8d63-a38f9521aa66-kube-api-access-mz2zq\") pod \"5e6262e3-eabd-42e7-8d63-a38f9521aa66\" (UID: \"5e6262e3-eabd-42e7-8d63-a38f9521aa66\") "
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.473323    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6262e3-eabd-42e7-8d63-a38f9521aa66-kube-api-access-mz2zq" (OuterVolumeSpecName: "kube-api-access-mz2zq") pod "5e6262e3-eabd-42e7-8d63-a38f9521aa66" (UID: "5e6262e3-eabd-42e7-8d63-a38f9521aa66"). InnerVolumeSpecName "kube-api-access-mz2zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.473645    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6262e3-eabd-42e7-8d63-a38f9521aa66-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5e6262e3-eabd-42e7-8d63-a38f9521aa66" (UID: "5e6262e3-eabd-42e7-8d63-a38f9521aa66"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.497456    1204 scope.go:117] "RemoveContainer" containerID="a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22"
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.528188    1204 scope.go:117] "RemoveContainer" containerID="a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22"
	Sep 20 21:01:39 addons-628754 kubelet[1204]: E0920 21:01:39.529148    1204 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22\": container with ID starting with a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22 not found: ID does not exist" containerID="a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22"
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.529204    1204 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22"} err="failed to get container status \"a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22\": rpc error: code = NotFound desc = could not find container \"a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22\": container with ID starting with a0c0ddf88e508d83be96a152a511a0ff6eed1c3737c08b137eaad5e7461c0d22 not found: ID does not exist"
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.566029    1204 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5e6262e3-eabd-42e7-8d63-a38f9521aa66-webhook-cert\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.566081    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mz2zq\" (UniqueName: \"kubernetes.io/projected/5e6262e3-eabd-42e7-8d63-a38f9521aa66-kube-api-access-mz2zq\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 21:01:39 addons-628754 kubelet[1204]: I0920 21:01:39.719565    1204 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6262e3-eabd-42e7-8d63-a38f9521aa66" path="/var/lib/kubelet/pods/5e6262e3-eabd-42e7-8d63-a38f9521aa66/volumes"
	Sep 20 21:01:41 addons-628754 kubelet[1204]: I0920 21:01:41.956554    1204 scope.go:117] "RemoveContainer" containerID="beed3a84c7c5ea044cf0bc8e69b00e19e95db3c686a0cf234c4f1e5a1478bc51"
	Sep 20 21:01:41 addons-628754 kubelet[1204]: I0920 21:01:41.972401    1204 scope.go:117] "RemoveContainer" containerID="f7eacbc7a27b8fbcf6b6d3171788408c55d4743ad098922818bb9e5bd37fdbdd"
	Sep 20 21:01:43 addons-628754 kubelet[1204]: E0920 21:01:43.718769    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	
	
	==> storage-provisioner [d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6] <==
	I0920 20:48:51.342179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:48:51.371362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:48:51.371432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:48:51.522612       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:48:51.529164       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	I0920 20:48:51.523653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7272c10f-2050-4885-9a02-b4fcd176512f", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256 became leader
	I0920 20:48:51.731924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-628754 -n addons-628754
helpers_test.go:261: (dbg) Run:  kubectl --context addons-628754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-628754 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-628754 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-628754/192.168.39.204
	Start Time:       Fri, 20 Sep 2024 20:50:09 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s2lkf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s2lkf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-628754
	  Normal   Pulling    10m (x4 over 11m)    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     10m (x4 over 11m)    kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     10m (x4 over 11m)    kubelet            Error: ErrImagePull
	  Warning  Failed     9m53s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    91s (x43 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.40s)
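
Note: the post-mortem above shows the only non-running pod is busybox, stuck in ImagePullBackOff because pulls of gcr.io/k8s-minikube/busybox:1.28.4-glibc fail with an auth error ("unable to retrieve auth token: invalid username/password"). A minimal way to re-check this from the same context is to list the pod's events directly; this is a sketch using only standard kubectl field selectors, not part of the test harness:

	kubectl --context addons-628754 get events -n default \
	  --field-selector involvedObject.kind=Pod,involvedObject.name=busybox \
	  --sort-by=.lastTimestamp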

                                                
                                    
TestAddons/parallel/MetricsServer (356.2s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.561688ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004638347s
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (65.387579ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 9m54.121950185s

                                                
                                                
** /stderr **
I0920 20:58:36.123589   16785 retry.go:31] will retry after 3.168964381s: exit status 1
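
Note: the repeated "Metrics not available" errors below mean the metrics-server pod is Running but the resource-metrics pipeline has not served samples yet (the apiserver log earlier shows the metrics.k8s.io endpoint briefly refusing connections). When triaging this by hand, one might first confirm the aggregated API is registered and Available before retrying kubectl top; a minimal sketch against the same context, assuming only standard kubectl and nothing test-framework specific:

	kubectl --context addons-628754 get apiservices v1beta1.metrics.k8s.io
	kubectl --context addons-628754 top nodes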
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (66.362734ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 9m57.357310213s

                                                
                                                
** /stderr **
I0920 20:58:39.359424   16785 retry.go:31] will retry after 3.85118211s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (66.598478ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 10m1.27701152s

                                                
                                                
** /stderr **
I0920 20:58:43.278420   16785 retry.go:31] will retry after 5.25523083s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (75.147739ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 10m6.60594802s

                                                
                                                
** /stderr **
I0920 20:58:48.609197   16785 retry.go:31] will retry after 9.726488921s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (67.537716ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 10m16.402247311s

                                                
                                                
** /stderr **
I0920 20:58:58.404259   16785 retry.go:31] will retry after 8.644390825s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (67.398455ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 10m25.115053086s

** /stderr **
I0920 20:59:07.116662   16785 retry.go:31] will retry after 17.373965195s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (77.240711ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 10m42.566812095s

** /stderr **
I0920 20:59:24.568686   16785 retry.go:31] will retry after 48.020276906s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (65.104383ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 11m30.654353681s

** /stderr **
I0920 21:00:12.656366   16785 retry.go:31] will retry after 47.407157933s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (62.445194ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 12m18.125122426s

** /stderr **
I0920 21:01:00.126834   16785 retry.go:31] will retry after 1m17.852255825s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (63.868245ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 13m36.048030171s

** /stderr **
I0920 21:02:18.049738   16785 retry.go:31] will retry after 1m4.620662471s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (67.64284ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 14m40.738039392s

** /stderr **
I0920 21:03:22.740309   16785 retry.go:31] will retry after 1m0.593174366s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-628754 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-628754 top pods -n kube-system: exit status 1 (66.750859ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-2s7hq, age: 15m41.400900589s

** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
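The retry.go entries above show the check backing off with progressively longer delays until the overall budget is spent. A minimal sketch of that kind of exponential-backoff poll, using only the Go standard library (the command, profile name and limits are illustrative, not minikube's actual helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// topPods runs "kubectl top pods" against a context and namespace; it returns
// an error for as long as the metrics API has nothing to report.
func topPods(kubeContext, namespace string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"top", "pods", "-n", namespace).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // overall budget for the check
	delay := 3 * time.Second                    // first retry interval

	for {
		err := topPods("addons-628754", "kube-system")
		if err == nil {
			fmt.Println("metrics available")
			return
		}
		if time.Now().After(deadline) {
			fmt.Println("failed checking metrics server:", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off between attempts
	}
}

Backing off keeps the poll cheap while metrics-server scrapes its first samples; in this run the data never arrived, so a loop like this would give up at the deadline just as the test did.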
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-628754 -n addons-628754
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 logs -n 25: (1.380865621s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-437637                                                                     | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-454496                                                                     | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | binary-mirror-545297                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:41175                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-545297                                                                     | binary-mirror-545297 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| addons  | disable dashboard -p                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-628754 --wait=true                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:50 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | -p addons-628754                                                                            |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-628754 ssh cat                                                                       | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | /opt/local-path-provisioner/pvc-3e8f244a-7a2e-48ce-baa7-c2ca4ae064da_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:58 UTC | 20 Sep 24 20:58 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | addons-628754                                                                               |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-628754 ip                                                                            | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC | 20 Sep 24 20:59 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-628754 ssh curl -s                                                                   | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 20:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-628754 ip                                                                            | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-628754 addons disable                                                                | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:01 UTC | 20 Sep 24 21:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-628754 addons                                                                        | addons-628754        | jenkins | v1.34.0 | 20 Sep 24 21:04 UTC | 20 Sep 24 21:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:57
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
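Every line that follows carries the klog-style prefix documented above. A self-contained sketch of a regular expression that splits such a line into its fields (the field names here are illustrative only):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches lines such as
//   I0920 20:47:57.083784   17426 out.go:345] Setting OutFile to fd 1 ...
// capturing severity, mmdd, time, thread id, file:line and the message.
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I0920 20:47:57.083784   17426 out.go:345] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s tid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}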
	I0920 20:47:57.083784   17426 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:57.084049   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084058   17426 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:57.084063   17426 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:57.084249   17426 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 20:47:57.084816   17426 out.go:352] Setting JSON to false
	I0920 20:47:57.085616   17426 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1825,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:57.085733   17426 start.go:139] virtualization: kvm guest
	I0920 20:47:57.088022   17426 out.go:177] * [addons-628754] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:57.089697   17426 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 20:47:57.089699   17426 notify.go:220] Checking for updates...
	I0920 20:47:57.091121   17426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:57.092364   17426 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:47:57.093781   17426 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.095208   17426 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 20:47:57.096741   17426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:47:57.098321   17426 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:57.129581   17426 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 20:47:57.130933   17426 start.go:297] selected driver: kvm2
	I0920 20:47:57.131005   17426 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:57.131026   17426 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:47:57.131969   17426 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.132061   17426 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:57.147486   17426 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:57.147545   17426 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:57.147780   17426 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:47:57.147811   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:47:57.147852   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:47:57.147859   17426 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:57.147906   17426 start.go:340] cluster config:
	{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:57.148009   17426 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:57.150689   17426 out.go:177] * Starting "addons-628754" primary control-plane node in "addons-628754" cluster
	I0920 20:47:57.151657   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:47:57.151699   17426 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:57.151709   17426 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:57.151783   17426 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 20:47:57.151793   17426 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 20:47:57.152110   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:47:57.152131   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json: {Name:mkca579941f55a7fbb4bc1fe5edc46778dec65b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:47:57.152274   17426 start.go:360] acquireMachinesLock for addons-628754: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 20:47:57.152320   17426 start.go:364] duration metric: took 31.79µs to acquireMachinesLock for "addons-628754"
	I0920 20:47:57.152337   17426 start.go:93] Provisioning new machine with config: &{Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:47:57.152388   17426 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 20:47:57.153857   17426 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0920 20:47:57.153989   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:47:57.154024   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:47:57.168714   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36829
	I0920 20:47:57.169174   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:47:57.169774   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:47:57.169793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:47:57.170182   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:47:57.170355   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:47:57.170487   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:47:57.170634   17426 start.go:159] libmachine.API.Create for "addons-628754" (driver="kvm2")
	I0920 20:47:57.170665   17426 client.go:168] LocalClient.Create starting
	I0920 20:47:57.170716   17426 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 20:47:57.285508   17426 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 20:47:57.498317   17426 main.go:141] libmachine: Running pre-create checks...
	I0920 20:47:57.498341   17426 main.go:141] libmachine: (addons-628754) Calling .PreCreateCheck
	I0920 20:47:57.498846   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:47:57.499269   17426 main.go:141] libmachine: Creating machine...
	I0920 20:47:57.499283   17426 main.go:141] libmachine: (addons-628754) Calling .Create
	I0920 20:47:57.499443   17426 main.go:141] libmachine: (addons-628754) Creating KVM machine...
	I0920 20:47:57.500685   17426 main.go:141] libmachine: (addons-628754) DBG | found existing default KVM network
	I0920 20:47:57.501409   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.501274   17448 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002111f0}
	I0920 20:47:57.501455   17426 main.go:141] libmachine: (addons-628754) DBG | created network xml: 
	I0920 20:47:57.501475   17426 main.go:141] libmachine: (addons-628754) DBG | <network>
	I0920 20:47:57.501483   17426 main.go:141] libmachine: (addons-628754) DBG |   <name>mk-addons-628754</name>
	I0920 20:47:57.501488   17426 main.go:141] libmachine: (addons-628754) DBG |   <dns enable='no'/>
	I0920 20:47:57.501495   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501501   17426 main.go:141] libmachine: (addons-628754) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 20:47:57.501506   17426 main.go:141] libmachine: (addons-628754) DBG |     <dhcp>
	I0920 20:47:57.501513   17426 main.go:141] libmachine: (addons-628754) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 20:47:57.501518   17426 main.go:141] libmachine: (addons-628754) DBG |     </dhcp>
	I0920 20:47:57.501522   17426 main.go:141] libmachine: (addons-628754) DBG |   </ip>
	I0920 20:47:57.501529   17426 main.go:141] libmachine: (addons-628754) DBG |   
	I0920 20:47:57.501533   17426 main.go:141] libmachine: (addons-628754) DBG | </network>
	I0920 20:47:57.501539   17426 main.go:141] libmachine: (addons-628754) DBG | 
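The XML echoed above is a plain libvirt network definition: DNS disabled and a single DHCP range on 192.168.39.0/24. Assuming the libvirt.org/go/libvirt bindings, defining and starting an equivalent network could look roughly like this sketch (error handling kept minimal):

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-addons-628754</name>
  <dns enable='no'/>
  <ip address='192.168.39.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.39.2' end='192.168.39.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent network from the XML above, then start it.
	nw, err := conn.NetworkDefineXML(networkXML)
	if err != nil {
		log.Fatal(err)
	}
	defer nw.Free()
	if err := nw.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("private network mk-addons-628754 is up")
}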
	I0920 20:47:57.508174   17426 main.go:141] libmachine: (addons-628754) DBG | trying to create private KVM network mk-addons-628754 192.168.39.0/24...
	I0920 20:47:57.573213   17426 main.go:141] libmachine: (addons-628754) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.573243   17426 main.go:141] libmachine: (addons-628754) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:57.573254   17426 main.go:141] libmachine: (addons-628754) DBG | private KVM network mk-addons-628754 192.168.39.0/24 created
	I0920 20:47:57.573273   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.573184   17448 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.573460   17426 main.go:141] libmachine: (addons-628754) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 20:47:57.830461   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.830307   17448 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa...
	I0920 20:47:57.903386   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903247   17448 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk...
	I0920 20:47:57.903428   17426 main.go:141] libmachine: (addons-628754) DBG | Writing magic tar header
	I0920 20:47:57.903439   17426 main.go:141] libmachine: (addons-628754) DBG | Writing SSH key tar header
	I0920 20:47:57.903449   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:57.903380   17448 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 ...
	I0920 20:47:57.903466   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754
	I0920 20:47:57.903508   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754 (perms=drwx------)
	I0920 20:47:57.903526   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 20:47:57.903538   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 20:47:57.903554   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:57.903628   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 20:47:57.903663   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 20:47:57.903677   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 20:47:57.903688   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 20:47:57.903699   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 20:47:57.903713   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home/jenkins
	I0920 20:47:57.903723   17426 main.go:141] libmachine: (addons-628754) DBG | Checking permissions on dir: /home
	I0920 20:47:57.903736   17426 main.go:141] libmachine: (addons-628754) DBG | Skipping /home - not owner
	I0920 20:47:57.903755   17426 main.go:141] libmachine: (addons-628754) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 20:47:57.903769   17426 main.go:141] libmachine: (addons-628754) Creating domain...
	I0920 20:47:57.904753   17426 main.go:141] libmachine: (addons-628754) define libvirt domain using xml: 
	I0920 20:47:57.904778   17426 main.go:141] libmachine: (addons-628754) <domain type='kvm'>
	I0920 20:47:57.904788   17426 main.go:141] libmachine: (addons-628754)   <name>addons-628754</name>
	I0920 20:47:57.904826   17426 main.go:141] libmachine: (addons-628754)   <memory unit='MiB'>4000</memory>
	I0920 20:47:57.904840   17426 main.go:141] libmachine: (addons-628754)   <vcpu>2</vcpu>
	I0920 20:47:57.904849   17426 main.go:141] libmachine: (addons-628754)   <features>
	I0920 20:47:57.904882   17426 main.go:141] libmachine: (addons-628754)     <acpi/>
	I0920 20:47:57.904904   17426 main.go:141] libmachine: (addons-628754)     <apic/>
	I0920 20:47:57.904935   17426 main.go:141] libmachine: (addons-628754)     <pae/>
	I0920 20:47:57.904957   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.904970   17426 main.go:141] libmachine: (addons-628754)   </features>
	I0920 20:47:57.904997   17426 main.go:141] libmachine: (addons-628754)   <cpu mode='host-passthrough'>
	I0920 20:47:57.905010   17426 main.go:141] libmachine: (addons-628754)   
	I0920 20:47:57.905023   17426 main.go:141] libmachine: (addons-628754)   </cpu>
	I0920 20:47:57.905034   17426 main.go:141] libmachine: (addons-628754)   <os>
	I0920 20:47:57.905043   17426 main.go:141] libmachine: (addons-628754)     <type>hvm</type>
	I0920 20:47:57.905052   17426 main.go:141] libmachine: (addons-628754)     <boot dev='cdrom'/>
	I0920 20:47:57.905061   17426 main.go:141] libmachine: (addons-628754)     <boot dev='hd'/>
	I0920 20:47:57.905071   17426 main.go:141] libmachine: (addons-628754)     <bootmenu enable='no'/>
	I0920 20:47:57.905078   17426 main.go:141] libmachine: (addons-628754)   </os>
	I0920 20:47:57.905087   17426 main.go:141] libmachine: (addons-628754)   <devices>
	I0920 20:47:57.905096   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='cdrom'>
	I0920 20:47:57.905111   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/boot2docker.iso'/>
	I0920 20:47:57.905125   17426 main.go:141] libmachine: (addons-628754)       <target dev='hdc' bus='scsi'/>
	I0920 20:47:57.905135   17426 main.go:141] libmachine: (addons-628754)       <readonly/>
	I0920 20:47:57.905150   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905168   17426 main.go:141] libmachine: (addons-628754)     <disk type='file' device='disk'>
	I0920 20:47:57.905191   17426 main.go:141] libmachine: (addons-628754)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 20:47:57.905212   17426 main.go:141] libmachine: (addons-628754)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/addons-628754.rawdisk'/>
	I0920 20:47:57.905226   17426 main.go:141] libmachine: (addons-628754)       <target dev='hda' bus='virtio'/>
	I0920 20:47:57.905236   17426 main.go:141] libmachine: (addons-628754)     </disk>
	I0920 20:47:57.905246   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905258   17426 main.go:141] libmachine: (addons-628754)       <source network='mk-addons-628754'/>
	I0920 20:47:57.905267   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905277   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905292   17426 main.go:141] libmachine: (addons-628754)     <interface type='network'>
	I0920 20:47:57.905308   17426 main.go:141] libmachine: (addons-628754)       <source network='default'/>
	I0920 20:47:57.905320   17426 main.go:141] libmachine: (addons-628754)       <model type='virtio'/>
	I0920 20:47:57.905335   17426 main.go:141] libmachine: (addons-628754)     </interface>
	I0920 20:47:57.905345   17426 main.go:141] libmachine: (addons-628754)     <serial type='pty'>
	I0920 20:47:57.905351   17426 main.go:141] libmachine: (addons-628754)       <target port='0'/>
	I0920 20:47:57.905362   17426 main.go:141] libmachine: (addons-628754)     </serial>
	I0920 20:47:57.905372   17426 main.go:141] libmachine: (addons-628754)     <console type='pty'>
	I0920 20:47:57.905387   17426 main.go:141] libmachine: (addons-628754)       <target type='serial' port='0'/>
	I0920 20:47:57.905404   17426 main.go:141] libmachine: (addons-628754)     </console>
	I0920 20:47:57.905414   17426 main.go:141] libmachine: (addons-628754)     <rng model='virtio'>
	I0920 20:47:57.905426   17426 main.go:141] libmachine: (addons-628754)       <backend model='random'>/dev/random</backend>
	I0920 20:47:57.905436   17426 main.go:141] libmachine: (addons-628754)     </rng>
	I0920 20:47:57.905445   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905453   17426 main.go:141] libmachine: (addons-628754)     
	I0920 20:47:57.905463   17426 main.go:141] libmachine: (addons-628754)   </devices>
	I0920 20:47:57.905471   17426 main.go:141] libmachine: (addons-628754) </domain>
	I0920 20:47:57.905483   17426 main.go:141] libmachine: (addons-628754) 
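With the ISO, raw disk and private network in place, the domain XML above is defined and booted through libvirt. A rough sketch of that step with the libvirt.org/go/libvirt bindings; the XML here is cut down to the essentials, whereas the logged definition also attaches the cdrom, the raw disk and two virtio interfaces:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

// Abbreviated domain definition; the full one is echoed in the log above.
const domainXML = `<domain type='kvm'>
  <name>addons-628754</name>
  <memory unit='MiB'>4000</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it, the equivalent of "virsh start"
		log.Fatal(err)
	}
}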
	I0920 20:47:57.911357   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:e6:b4:6f in network default
	I0920 20:47:57.911872   17426 main.go:141] libmachine: (addons-628754) Ensuring networks are active...
	I0920 20:47:57.911889   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:57.912569   17426 main.go:141] libmachine: (addons-628754) Ensuring network default is active
	I0920 20:47:57.912863   17426 main.go:141] libmachine: (addons-628754) Ensuring network mk-addons-628754 is active
	I0920 20:47:57.913288   17426 main.go:141] libmachine: (addons-628754) Getting domain xml...
	I0920 20:47:57.913898   17426 main.go:141] libmachine: (addons-628754) Creating domain...
	I0920 20:47:59.326755   17426 main.go:141] libmachine: (addons-628754) Waiting to get IP...
	I0920 20:47:59.327557   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.328005   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.328033   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.327972   17448 retry.go:31] will retry after 302.559317ms: waiting for machine to come up
	I0920 20:47:59.632500   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:47:59.632880   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:47:59.632917   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:47:59.632857   17448 retry.go:31] will retry after 377.023743ms: waiting for machine to come up
	I0920 20:48:00.011524   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.012077   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.012099   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.012026   17448 retry.go:31] will retry after 369.422393ms: waiting for machine to come up
	I0920 20:48:00.382598   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.383102   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.383131   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.383057   17448 retry.go:31] will retry after 534.800716ms: waiting for machine to come up
	I0920 20:48:00.919797   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:00.920325   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:00.920348   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:00.920289   17448 retry.go:31] will retry after 493.667737ms: waiting for machine to come up
	I0920 20:48:01.416002   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:01.416381   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:01.416402   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:01.416339   17448 retry.go:31] will retry after 621.848692ms: waiting for machine to come up
	I0920 20:48:02.040290   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.040677   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.040701   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.040643   17448 retry.go:31] will retry after 861.306251ms: waiting for machine to come up
	I0920 20:48:02.903325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:02.903717   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:02.903735   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:02.903678   17448 retry.go:31] will retry after 1.459954527s: waiting for machine to come up
	I0920 20:48:04.365414   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:04.365855   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:04.365878   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:04.365791   17448 retry.go:31] will retry after 1.358721041s: waiting for machine to come up
	I0920 20:48:05.726477   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:05.726947   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:05.726992   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:05.726912   17448 retry.go:31] will retry after 1.859066738s: waiting for machine to come up
	I0920 20:48:07.588072   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:07.588572   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:07.588597   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:07.588527   17448 retry.go:31] will retry after 2.363120827s: waiting for machine to come up
	I0920 20:48:09.953935   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:09.954415   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:09.954440   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:09.954387   17448 retry.go:31] will retry after 2.276156744s: waiting for machine to come up
	I0920 20:48:12.232531   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:12.232978   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:12.233000   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:12.232935   17448 retry.go:31] will retry after 3.494675482s: waiting for machine to come up
	I0920 20:48:15.728788   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:15.729264   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find current IP address of domain addons-628754 in network mk-addons-628754
	I0920 20:48:15.729297   17426 main.go:141] libmachine: (addons-628754) DBG | I0920 20:48:15.729233   17448 retry.go:31] will retry after 3.623474403s: waiting for machine to come up
	I0920 20:48:19.353890   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.354300   17426 main.go:141] libmachine: (addons-628754) Found IP for machine: 192.168.39.204
	I0920 20:48:19.354318   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has current primary IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
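The "Waiting to get IP" retries above poll the private network's DHCP leases until a lease for the new domain's MAC address shows up. A sketch of that polling, again assuming the libvirt.org/go/libvirt bindings (MAC and network name are taken from the log; the fixed interval stands in for the growing delays seen above):

package main

import (
	"fmt"
	"log"
	"time"

	"libvirt.org/go/libvirt"
)

// waitForIP polls the DHCP leases of a libvirt network until a lease for the
// given MAC address appears, then returns its IP address.
func waitForIP(conn *libvirt.Connect, network, mac string) (string, error) {
	nw, err := conn.LookupNetworkByName(network)
	if err != nil {
		return "", err
	}
	defer nw.Free()

	for attempt := 0; attempt < 60; attempt++ {
		leases, err := nw.GetDHCPLeases()
		if err != nil {
			return "", err
		}
		for _, l := range leases {
			if l.Mac == mac {
				return l.IPaddr, nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
}

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ip, err := waitForIP(conn, "mk-addons-628754", "52:54:00:59:f8:c3")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("found IP for machine:", ip)
}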
	I0920 20:48:19.354324   17426 main.go:141] libmachine: (addons-628754) Reserving static IP address...
	I0920 20:48:19.354718   17426 main.go:141] libmachine: (addons-628754) DBG | unable to find host DHCP lease matching {name: "addons-628754", mac: "52:54:00:59:f8:c3", ip: "192.168.39.204"} in network mk-addons-628754
	I0920 20:48:19.430989   17426 main.go:141] libmachine: (addons-628754) DBG | Getting to WaitForSSH function...
	I0920 20:48:19.431022   17426 main.go:141] libmachine: (addons-628754) Reserved static IP address: 192.168.39.204
	I0920 20:48:19.431035   17426 main.go:141] libmachine: (addons-628754) Waiting for SSH to be available...
	I0920 20:48:19.433368   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433729   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.433754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.433893   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH client type: external
	I0920 20:48:19.433951   17426 main.go:141] libmachine: (addons-628754) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa (-rw-------)
	I0920 20:48:19.433990   17426 main.go:141] libmachine: (addons-628754) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 20:48:19.434007   17426 main.go:141] libmachine: (addons-628754) DBG | About to run SSH command:
	I0920 20:48:19.434034   17426 main.go:141] libmachine: (addons-628754) DBG | exit 0
	I0920 20:48:19.567000   17426 main.go:141] libmachine: (addons-628754) DBG | SSH cmd err, output: <nil>: 
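WaitForSSH shells out to the system ssh client with the options listed above and keeps running "exit 0" until it succeeds. A minimal sketch of such a reachability probe with os/exec (the options are copied from the log; the key path and retry count are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns nil once "exit 0" can be executed over SSH on the host.
func sshReady(host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	for attempt := 1; attempt <= 30; attempt++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("ssh never became available on %s", host)
}

func main() {
	if err := sshReady("192.168.39.204", "/path/to/machines/addons-628754/id_rsa"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("SSH is available")
}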
	I0920 20:48:19.567274   17426 main.go:141] libmachine: (addons-628754) KVM machine creation complete!
	I0920 20:48:19.567605   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:19.568125   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568301   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:19.568423   17426 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 20:48:19.568447   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:19.569781   17426 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 20:48:19.569796   17426 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 20:48:19.569803   17426 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 20:48:19.569812   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.572230   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572609   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.572636   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.572734   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.572904   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.573263   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.573417   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.573596   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.573606   17426 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 20:48:19.686484   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:48:19.686510   17426 main.go:141] libmachine: Detecting the provisioner...
	I0920 20:48:19.686520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.689488   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.689939   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.689963   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.690171   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.690367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690533   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.690675   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.690829   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.691037   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.691051   17426 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 20:48:19.803894   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 20:48:19.803943   17426 main.go:141] libmachine: found compatible host: buildroot
	I0920 20:48:19.803949   17426 main.go:141] libmachine: Provisioning with buildroot...
	I0920 20:48:19.803959   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804229   17426 buildroot.go:166] provisioning hostname "addons-628754"
	I0920 20:48:19.804257   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:19.804429   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.807930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808301   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.808325   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.808470   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.808640   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808753   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.808855   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.808986   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.809178   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.809194   17426 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-628754 && echo "addons-628754" | sudo tee /etc/hostname
	I0920 20:48:19.933979   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-628754
	
	I0920 20:48:19.934010   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:19.936671   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.936966   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:19.937020   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:19.937216   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:19.937412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937572   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:19.937681   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:19.937835   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:19.938005   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:19.938027   17426 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-628754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-628754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-628754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:48:20.060621   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
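For reference, a minimal Go sketch (hypothetical, not minikube's actual provisioner code) of the idempotent /etc/hosts edit the shell snippet above performs: leave the file alone if a line already ends in the hostname, otherwise rewrite the 127.0.1.1 entry or append one.

package main

import (
	"os"
	"strings"
)

// ensureHostname mirrors the shell logic above: add "127.0.1.1 <hostname>"
// to /etc/hosts unless some line already maps the hostname.
func ensureHostname(hostsPath, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == hostname {
			return nil // already mapped
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	_ = ensureHostname("/etc/hosts", "addons-628754")
}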
	I0920 20:48:20.060648   17426 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 20:48:20.060666   17426 buildroot.go:174] setting up certificates
	I0920 20:48:20.060675   17426 provision.go:84] configureAuth start
	I0920 20:48:20.060683   17426 main.go:141] libmachine: (addons-628754) Calling .GetMachineName
	I0920 20:48:20.060932   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.063555   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.063906   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.063931   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.064077   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.066288   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066581   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.066599   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.066772   17426 provision.go:143] copyHostCerts
	I0920 20:48:20.066842   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 20:48:20.067014   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 20:48:20.067102   17426 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 20:48:20.067175   17426 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.addons-628754 san=[127.0.0.1 192.168.39.204 addons-628754 localhost minikube]
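The san=[...] list above feeds the machine's server certificate. A minimal sketch, assuming only the standard crypto/x509 package, of what such a certificate template looks like; minikube's real helper additionally signs it with the CA key and PEM-encodes the result.

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func serverCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-628754"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the log line above: two IPs plus three DNS names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.204")},
		DNSNames:    []string{"addons-628754", "localhost", "minikube"},
	}
}

func main() { _ = serverCertTemplate() }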
	I0920 20:48:20.207459   17426 provision.go:177] copyRemoteCerts
	I0920 20:48:20.207541   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:48:20.207568   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.210051   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210438   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.210465   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.210641   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.210815   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.210936   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.211089   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
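A hedged sketch of how an SSH client like the one logged here (key-based auth as docker@192.168.39.204) can be opened with golang.org/x/crypto/ssh; the host-key check is skipped to mirror the StrictHostKeyChecking=no option used earlier, and sshutil's retry and timeout handling is omitted.

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func dialGuest(addr, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, matches StrictHostKeyChecking=no
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	c, err := dialGuest("192.168.39.204:22",
		"/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa",
		"docker")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
}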
	I0920 20:48:20.297517   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 20:48:20.323502   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 20:48:20.348382   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 20:48:20.372714   17426 provision.go:87] duration metric: took 312.024389ms to configureAuth
	I0920 20:48:20.372743   17426 buildroot.go:189] setting minikube options for container-runtime
	I0920 20:48:20.372974   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:20.373093   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.375745   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376103   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.376129   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.376286   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.376456   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376577   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.376741   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.376885   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.377069   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.377088   17426 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 20:48:20.603620   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 20:48:20.603645   17426 main.go:141] libmachine: Checking connection to Docker...
	I0920 20:48:20.603654   17426 main.go:141] libmachine: (addons-628754) Calling .GetURL
	I0920 20:48:20.605014   17426 main.go:141] libmachine: (addons-628754) DBG | Using libvirt version 6000000
	I0920 20:48:20.607455   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607777   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.607807   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.607965   17426 main.go:141] libmachine: Docker is up and running!
	I0920 20:48:20.607980   17426 main.go:141] libmachine: Reticulating splines...
	I0920 20:48:20.607989   17426 client.go:171] duration metric: took 23.437316144s to LocalClient.Create
	I0920 20:48:20.608018   17426 start.go:167] duration metric: took 23.437384362s to libmachine.API.Create "addons-628754"
	I0920 20:48:20.608030   17426 start.go:293] postStartSetup for "addons-628754" (driver="kvm2")
	I0920 20:48:20.608041   17426 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:48:20.608073   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.608293   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:48:20.608319   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.610317   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610627   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.610652   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.610781   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.610921   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.611034   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.611124   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.698116   17426 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:48:20.702771   17426 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 20:48:20.702801   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 20:48:20.702873   17426 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 20:48:20.702896   17426 start.go:296] duration metric: took 94.859468ms for postStartSetup
	I0920 20:48:20.702927   17426 main.go:141] libmachine: (addons-628754) Calling .GetConfigRaw
	I0920 20:48:20.703462   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.705754   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706028   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.706043   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.706282   17426 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/config.json ...
	I0920 20:48:20.706477   17426 start.go:128] duration metric: took 23.554079183s to createHost
	I0920 20:48:20.706501   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.708765   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709058   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.709077   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.709229   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.709401   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709532   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.709626   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.709751   17426 main.go:141] libmachine: Using SSH client type: native
	I0920 20:48:20.709947   17426 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.204 22 <nil> <nil>}
	I0920 20:48:20.709959   17426 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 20:48:20.824097   17426 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726865300.788019293
	
	I0920 20:48:20.824119   17426 fix.go:216] guest clock: 1726865300.788019293
	I0920 20:48:20.824129   17426 fix.go:229] Guest: 2024-09-20 20:48:20.788019293 +0000 UTC Remote: 2024-09-20 20:48:20.706488068 +0000 UTC m=+23.656717453 (delta=81.531225ms)
	I0920 20:48:20.824169   17426 fix.go:200] guest clock delta is within tolerance: 81.531225ms
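A sketch of the tolerance check behind the fix.go lines above: compare the guest's clock against the host's and accept the drift when the absolute delta stays under a threshold (the 2-second limit below is an assumed value, not minikube's actual constant).

package main

import (
	"fmt"
	"time"
)

func clockWithinTolerance(guest, host time.Time, max time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= max
}

func main() {
	host := time.Now()
	guest := host.Add(81531225 * time.Nanosecond)                 // the 81.531225ms delta from the log
	fmt.Println(clockWithinTolerance(guest, host, 2*time.Second)) // true
}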
	I0920 20:48:20.824174   17426 start.go:83] releasing machines lock for "addons-628754", held for 23.671844938s
	I0920 20:48:20.824195   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.824499   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:20.827015   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827394   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.827425   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.827590   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828104   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828268   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:20.828372   17426 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:48:20.828427   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.828488   17426 ssh_runner.go:195] Run: cat /version.json
	I0920 20:48:20.828511   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:20.831137   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831303   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831481   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831511   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831658   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831693   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:20.831734   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:20.831898   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:20.831916   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832049   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832060   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:20.832158   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:20.832182   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.832264   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:20.937268   17426 ssh_runner.go:195] Run: systemctl --version
	I0920 20:48:20.943278   17426 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 20:48:21.102731   17426 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 20:48:21.109306   17426 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 20:48:21.109385   17426 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:48:21.125977   17426 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 20:48:21.126016   17426 start.go:495] detecting cgroup driver to use...
	I0920 20:48:21.126077   17426 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 20:48:21.141923   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 20:48:21.156336   17426 docker.go:217] disabling cri-docker service (if available) ...
	I0920 20:48:21.156395   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 20:48:21.170041   17426 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 20:48:21.183756   17426 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 20:48:21.294187   17426 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 20:48:21.437230   17426 docker.go:233] disabling docker service ...
	I0920 20:48:21.437293   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 20:48:21.452740   17426 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 20:48:21.465572   17426 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 20:48:21.594165   17426 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 20:48:21.703184   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 20:48:21.716853   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:48:21.735742   17426 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 20:48:21.735807   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.746697   17426 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 20:48:21.746783   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.758620   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.769552   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.780345   17426 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:48:21.791319   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.802437   17426 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 20:48:21.820563   17426 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
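The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. As an illustration only (the helper names are assumptions, not minikube's), this is how the pause-image and cgroup-manager one-liners can be composed in Go before being handed to ssh_runner:

package main

import "fmt"

const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

// pauseImageCmd builds the sed one-liner that pins the CRI-O pause image.
func pauseImageCmd(image string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, crioConf)
}

// cgroupManagerCmd builds the sed one-liner that sets the cgroup driver.
func cgroupManagerCmd(driver string) string {
	return fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, driver, crioConf)
}

func main() {
	fmt.Println(pauseImageCmd("registry.k8s.io/pause:3.10"))
	fmt.Println(cgroupManagerCmd("cgroupfs"))
}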
	I0920 20:48:21.831706   17426 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:48:21.841879   17426 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 20:48:21.841934   17426 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 20:48:21.855580   17426 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:48:21.865881   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:21.975704   17426 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 20:48:22.076074   17426 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 20:48:22.076172   17426 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 20:48:22.081028   17426 start.go:563] Will wait 60s for crictl version
	I0920 20:48:22.081104   17426 ssh_runner.go:195] Run: which crictl
	I0920 20:48:22.085024   17426 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:48:22.123994   17426 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 20:48:22.124131   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.155608   17426 ssh_runner.go:195] Run: crio --version
	I0920 20:48:22.254304   17426 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 20:48:22.271322   17426 main.go:141] libmachine: (addons-628754) Calling .GetIP
	I0920 20:48:22.274135   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274494   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:22.274519   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:22.274758   17426 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 20:48:22.279562   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:22.292693   17426 kubeadm.go:883] updating cluster {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:48:22.292826   17426 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:48:22.292872   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:22.328943   17426 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 20:48:22.329008   17426 ssh_runner.go:195] Run: which lz4
	I0920 20:48:22.333252   17426 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 20:48:22.337720   17426 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 20:48:22.337751   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 20:48:23.694739   17426 crio.go:462] duration metric: took 1.361518572s to copy over tarball
	I0920 20:48:23.694808   17426 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 20:48:25.883886   17426 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.18904765s)
	I0920 20:48:25.883920   17426 crio.go:469] duration metric: took 2.18915492s to extract the tarball
	I0920 20:48:25.883930   17426 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 20:48:25.922070   17426 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:48:25.969998   17426 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 20:48:25.970026   17426 cache_images.go:84] Images are preloaded, skipping loading
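A hedged sketch of the preload decision traced above: if /preloaded.tar.lz4 is not already on the guest, copy the cached tarball over, then extract it into /var so crictl reports the images as preloaded. runOnGuest and copyToGuest are assumed stand-ins for ssh_runner's Run and scp helpers.

package main

func ensurePreload(runOnGuest func(cmd string) error, copyToGuest func(src, dst string) error) error {
	const tarball = "/preloaded.tar.lz4"
	if err := runOnGuest(`stat -c "%s %y" ` + tarball); err != nil {
		// Not on the guest yet: push the cached tarball from the host.
		src := "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"
		if err := copyToGuest(src, tarball); err != nil {
			return err
		}
	}
	// Same tar invocation as in the log; the tarball is removed afterwards.
	return runOnGuest("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf " + tarball)
}

func main() {}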
	I0920 20:48:25.970037   17426 kubeadm.go:934] updating node { 192.168.39.204 8443 v1.31.1 crio true true} ...
	I0920 20:48:25.970130   17426 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-628754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 20:48:25.970191   17426 ssh_runner.go:195] Run: crio config
	I0920 20:48:26.021952   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:26.021980   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:26.021993   17426 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:48:26.022019   17426 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.204 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-628754 NodeName:addons-628754 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:48:26.022191   17426 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.204
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-628754"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
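The kubeadm config printed above is generated from the kubeadm options struct a few lines earlier. A minimal, hypothetical text/template sketch of how the InitConfiguration stanza could be rendered; minikube's real template lives in its bootstrapper package and covers the full document.

package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{NodeIP: "192.168.39.204", NodeName: "addons-628754", APIServerPort: 8443})
}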
	I0920 20:48:26.022274   17426 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:48:26.032477   17426 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:48:26.032538   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:48:26.042683   17426 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 20:48:26.059732   17426 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:48:26.076258   17426 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0920 20:48:26.093707   17426 ssh_runner.go:195] Run: grep 192.168.39.204	control-plane.minikube.internal$ /etc/hosts
	I0920 20:48:26.097791   17426 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:48:26.110716   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:26.236386   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:26.253722   17426 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754 for IP: 192.168.39.204
	I0920 20:48:26.253745   17426 certs.go:194] generating shared ca certs ...
	I0920 20:48:26.253764   17426 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.253934   17426 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 20:48:26.484930   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt ...
	I0920 20:48:26.484965   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt: {Name:mke2d7aeeb7db3e9800a1083a3ff2f86790ad6b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485165   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key ...
	I0920 20:48:26.485179   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key: {Name:mk17ffdadb7b9aab721bb9d1c788c9a2ddb298de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.485278   17426 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 20:48:26.719056   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt ...
	I0920 20:48:26.719086   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt: {Name:mkfada96f4c7b2fd8274928ed1dbb16b11a88c17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719248   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key ...
	I0920 20:48:26.719258   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key: {Name:mkd5ebdfcf48f4c7d47e8f4a6008f58fc89c2fa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.719323   17426 certs.go:256] generating profile certs ...
	I0920 20:48:26.719397   17426 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key
	I0920 20:48:26.719413   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt with IP's: []
	I0920 20:48:26.933060   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt ...
	I0920 20:48:26.933092   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: {Name:mk91a397bc4ff329a96b4b263f0366897ac67b3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933260   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key ...
	I0920 20:48:26.933270   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.key: {Name:mkb7eb1952eb7eecce91764549c5ad4831a240c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:26.933342   17426 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd
	I0920 20:48:26.933369   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.204]
	I0920 20:48:27.039700   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd ...
	I0920 20:48:27.039729   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd: {Name:mke2e260dbaf2f40e6a6aae6719316d894241e83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd ...
	I0920 20:48:27.039902   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd: {Name:mk8e28dd00075b54e6ff8d4a4eca983c0a2a96b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.039979   17426 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt
	I0920 20:48:27.040049   17426 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key.23a8a0fd -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key
	I0920 20:48:27.040093   17426 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key
	I0920 20:48:27.040110   17426 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt with IP's: []
	I0920 20:48:27.264693   17426 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt ...
	I0920 20:48:27.264722   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt: {Name:mkb317a338bad2a13df21d64fc6002a23cf2c57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.264890   17426 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key ...
	I0920 20:48:27.264900   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key: {Name:mk1033c85ac3998dbc1def471b2b99da31018275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:27.265070   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:48:27.265101   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:48:27.265124   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:48:27.265145   17426 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 20:48:27.265681   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:48:27.295419   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 20:48:27.327322   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:48:27.354123   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:48:27.380181   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 20:48:27.409469   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 20:48:27.434423   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:48:27.459334   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 20:48:27.486398   17426 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:48:27.514949   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:48:27.535057   17426 ssh_runner.go:195] Run: openssl version
	I0920 20:48:27.541556   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:48:27.552841   17426 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557488   17426 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.557542   17426 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:48:27.563591   17426 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:48:27.574585   17426 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:48:27.578724   17426 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:48:27.578767   17426 kubeadm.go:392] StartCluster: {Name:addons-628754 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-628754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:48:27.578828   17426 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 20:48:27.578888   17426 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 20:48:27.614372   17426 cri.go:89] found id: ""
	I0920 20:48:27.614428   17426 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:48:27.624616   17426 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:48:27.634583   17426 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:48:27.644397   17426 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:48:27.644424   17426 kubeadm.go:157] found existing configuration files:
	
	I0920 20:48:27.644475   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:48:27.654186   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:48:27.654252   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:48:27.663878   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:48:27.673440   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:48:27.673505   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:48:27.683173   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.692642   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:48:27.692697   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:48:27.703397   17426 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:48:27.713535   17426 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:48:27.713604   17426 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 20:48:27.723779   17426 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 20:48:27.782861   17426 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:48:27.782945   17426 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:48:27.906000   17426 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:48:27.906163   17426 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:48:27.906287   17426 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:48:27.919732   17426 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:48:27.997360   17426 out.go:235]   - Generating certificates and keys ...
	I0920 20:48:27.997466   17426 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 20:48:27.997553   17426 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 20:48:28.094857   17426 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 20:48:28.239479   17426 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 20:48:28.395927   17426 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 20:48:28.720325   17426 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 20:48:28.850828   17426 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 20:48:28.851063   17426 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:28.998676   17426 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 20:48:28.998936   17426 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-628754 localhost] and IPs [192.168.39.204 127.0.0.1 ::1]
	I0920 20:48:29.508367   17426 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 20:48:29.566861   17426 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 20:48:29.658573   17426 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 20:48:29.658676   17426 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 20:48:30.255811   17426 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 20:48:30.388840   17426 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 20:48:30.636127   17426 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 20:48:30.741849   17426 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 20:48:30.809198   17426 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 20:48:30.809704   17426 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 20:48:30.812136   17426 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 20:48:30.814442   17426 out.go:235]   - Booting up control plane ...
	I0920 20:48:30.814560   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 20:48:30.814659   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 20:48:30.814748   17426 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 20:48:30.829867   17426 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 20:48:30.836797   17426 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 20:48:30.837005   17426 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 20:48:30.959791   17426 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 20:48:30.959951   17426 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 20:48:31.461582   17426 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.116569ms
	I0920 20:48:31.461681   17426 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 20:48:36.962881   17426 kubeadm.go:310] [api-check] The API server is healthy after 5.50286333s
	I0920 20:48:36.984905   17426 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 20:48:37.018104   17426 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 20:48:37.059205   17426 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 20:48:37.059469   17426 kubeadm.go:310] [mark-control-plane] Marking the node addons-628754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 20:48:37.076513   17426 kubeadm.go:310] [bootstrap-token] Using token: bdg12k.viyp3bffzs7386tv
	I0920 20:48:37.078129   17426 out.go:235]   - Configuring RBAC rules ...
	I0920 20:48:37.078263   17426 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 20:48:37.084820   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 20:48:37.097641   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 20:48:37.105580   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 20:48:37.114381   17426 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 20:48:37.118116   17426 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 20:48:37.369422   17426 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 20:48:37.806370   17426 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 20:48:38.367972   17426 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 20:48:38.369871   17426 kubeadm.go:310] 
	I0920 20:48:38.369946   17426 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 20:48:38.369957   17426 kubeadm.go:310] 
	I0920 20:48:38.370072   17426 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 20:48:38.370082   17426 kubeadm.go:310] 
	I0920 20:48:38.370118   17426 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 20:48:38.370196   17426 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 20:48:38.370272   17426 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 20:48:38.370283   17426 kubeadm.go:310] 
	I0920 20:48:38.370376   17426 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 20:48:38.370400   17426 kubeadm.go:310] 
	I0920 20:48:38.370460   17426 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 20:48:38.370469   17426 kubeadm.go:310] 
	I0920 20:48:38.370530   17426 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 20:48:38.370630   17426 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 20:48:38.370742   17426 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 20:48:38.370757   17426 kubeadm.go:310] 
	I0920 20:48:38.370878   17426 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 20:48:38.370978   17426 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 20:48:38.370989   17426 kubeadm.go:310] 
	I0920 20:48:38.371071   17426 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371162   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 20:48:38.371182   17426 kubeadm.go:310] 	--control-plane 
	I0920 20:48:38.371188   17426 kubeadm.go:310] 
	I0920 20:48:38.371275   17426 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 20:48:38.371291   17426 kubeadm.go:310] 
	I0920 20:48:38.371414   17426 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token bdg12k.viyp3bffzs7386tv \
	I0920 20:48:38.371569   17426 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 20:48:38.373923   17426 kubeadm.go:310] W0920 20:48:27.749585     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374277   17426 kubeadm.go:310] W0920 20:48:27.750640     811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 20:48:38.374448   17426 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
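	The two v1beta3 warnings above point at kubeadm's own migration path; a minimal sketch of that migration, using the placeholder file names from the warning text rather than anything produced in this run:

	    # Sketch of the migration the warning suggests; old.yaml/new.yaml are placeholders from the warning text.
	    kubeadm config migrate --old-config old.yaml --new-config new.yaml
	    # Addresses the Service-Kubelet warning so the kubelet unit starts on boot.
	    sudo systemctl enable kubelet.service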
	I0920 20:48:38.374477   17426 cni.go:84] Creating CNI manager for ""
	I0920 20:48:38.374486   17426 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:48:38.376506   17426 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 20:48:38.377736   17426 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 20:48:38.391254   17426 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
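	The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is not reproduced in this log; a rough sketch of what a minimal bridge conflist of that kind typically contains (the cniVersion and subnet here are assumptions, not values captured in this run):

	    # Illustrative bridge CNI config only -- the real 1-k8s.conflist contents are not shown above.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF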
	I0920 20:48:38.410759   17426 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 20:48:38.410828   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:38.410916   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-628754 minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-628754 minikube.k8s.io/primary=true
	I0920 20:48:38.438031   17426 ops.go:34] apiserver oom_adj: -16
	I0920 20:48:38.537631   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.038754   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:39.537782   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.038712   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:40.538203   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.038038   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:41.538335   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.038436   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.537928   17426 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 20:48:42.637542   17426 kubeadm.go:1113] duration metric: took 4.226768279s to wait for elevateKubeSystemPrivileges
	I0920 20:48:42.637585   17426 kubeadm.go:394] duration metric: took 15.05882001s to StartCluster
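	The repeated "kubectl get sa default" calls above are a poll-until-ready wait for the default service account (the 4.2s elevateKubeSystemPrivileges metric covers that loop); an equivalent stand-alone sketch, with the 0.5s interval taken from the log cadence and the retry cap assumed:

	    # Poll until the "default" ServiceAccount exists, mirroring the loop logged above.
	    # The 0.5s sleep matches the timestamps above; the 120-try cap (~60s) is an assumption.
	    for i in $(seq 1 120); do
	      if sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	           --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; then
	        break
	      fi
	      sleep 0.5
	    done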
	I0920 20:48:42.637611   17426 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.637771   17426 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:48:42.638195   17426 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:48:42.638402   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 20:48:42.638407   17426 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.204 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 20:48:42.638474   17426 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 20:48:42.638588   17426 addons.go:69] Setting yakd=true in profile "addons-628754"
	I0920 20:48:42.638602   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.638606   17426 addons.go:69] Setting inspektor-gadget=true in profile "addons-628754"
	I0920 20:48:42.638622   17426 addons.go:69] Setting volcano=true in profile "addons-628754"
	I0920 20:48:42.638632   17426 addons.go:234] Setting addon volcano=true in "addons-628754"
	I0920 20:48:42.638613   17426 addons.go:234] Setting addon yakd=true in "addons-628754"
	I0920 20:48:42.638660   17426 addons.go:69] Setting metrics-server=true in profile "addons-628754"
	I0920 20:48:42.638668   17426 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-628754"
	I0920 20:48:42.638682   17426 addons.go:69] Setting gcp-auth=true in profile "addons-628754"
	I0920 20:48:42.638663   17426 addons.go:69] Setting cloud-spanner=true in profile "addons-628754"
	I0920 20:48:42.638683   17426 addons.go:69] Setting volumesnapshots=true in profile "addons-628754"
	I0920 20:48:42.638694   17426 addons.go:69] Setting ingress=true in profile "addons-628754"
	I0920 20:48:42.638704   17426 addons.go:234] Setting addon cloud-spanner=true in "addons-628754"
	I0920 20:48:42.638706   17426 addons.go:234] Setting addon volumesnapshots=true in "addons-628754"
	I0920 20:48:42.638709   17426 addons.go:234] Setting addon ingress=true in "addons-628754"
	I0920 20:48:42.638710   17426 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:42.638712   17426 addons.go:69] Setting default-storageclass=true in profile "addons-628754"
	I0920 20:48:42.638726   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-628754"
	I0920 20:48:42.638731   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638732   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638739   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638747   17426 addons.go:69] Setting ingress-dns=true in profile "addons-628754"
	I0920 20:48:42.638759   17426 addons.go:234] Setting addon ingress-dns=true in "addons-628754"
	I0920 20:48:42.638783   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638849   17426 addons.go:69] Setting registry=true in profile "addons-628754"
	I0920 20:48:42.638879   17426 addons.go:234] Setting addon registry=true in "addons-628754"
	I0920 20:48:42.638907   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638706   17426 mustload.go:65] Loading cluster: addons-628754
	I0920 20:48:42.639189   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639194   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639204   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.638669   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639211   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639222   17426 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-628754"
	I0920 20:48:42.639215   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639229   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639234   17426 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-628754"
	I0920 20:48:42.639239   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639259   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638657   17426 addons.go:234] Setting addon inspektor-gadget=true in "addons-628754"
	I0920 20:48:42.638623   17426 addons.go:69] Setting storage-provisioner=true in profile "addons-628754"
	I0920 20:48:42.639295   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639303   17426 addons.go:234] Setting addon storage-provisioner=true in "addons-628754"
	I0920 20:48:42.639310   17426 config.go:182] Loaded profile config "addons-628754": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 20:48:42.639322   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639328   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638618   17426 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-628754"
	I0920 20:48:42.639348   17426 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-628754"
	I0920 20:48:42.639357   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.638741   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.638687   17426 addons.go:234] Setting addon metrics-server=true in "addons-628754"
	I0920 20:48:42.639327   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639559   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639635   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639647   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639654   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639207   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639687   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.639669   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639689   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639712   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639774   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639778   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639792   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.639925   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.639957   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.640112   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.640835   17426 out.go:177] * Verifying Kubernetes components...
	I0920 20:48:42.642673   17426 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:48:42.660941   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0920 20:48:42.667687   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35235
	I0920 20:48:42.667692   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.667839   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678558   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.678612   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.678621   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.678756   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.679309   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679336   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679311   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.679408   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.679771   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.680337   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.680363   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.680576   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.690668   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34941
	I0920 20:48:42.691311   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.691872   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.691890   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.692290   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.692477   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.694008   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35761
	I0920 20:48:42.697482   17426 addons.go:234] Setting addon default-storageclass=true in "addons-628754"
	I0920 20:48:42.697526   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.697886   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.697921   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699272   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.699624   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.699675   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.699924   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.699947   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.700602   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.700795   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.701070   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38403
	I0920 20:48:42.701245   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35153
	I0920 20:48:42.701882   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.702429   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.702446   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.702516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41419
	I0920 20:48:42.702666   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.703662   17426 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-628754"
	I0920 20:48:42.703696   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.704048   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.704078   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.704398   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.704413   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.705415   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705489   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.705551   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.705944   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706011   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.706556   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.706575   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.707457   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.707477   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.708190   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.708812   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.708852   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.709335   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46703
	I0920 20:48:42.709840   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.710499   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.710518   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.710878   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.711395   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.711432   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.717610   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39885
	I0920 20:48:42.718243   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.718856   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.718894   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.719298   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.719840   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.719876   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.725177   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0920 20:48:42.725418   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I0920 20:48:42.725937   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726018   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.726549   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.726565   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.726909   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.727514   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.727549   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.727836   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.727849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.730675   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0920 20:48:42.730906   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0920 20:48:42.731307   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731389   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.731827   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.731849   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.732193   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.732707   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.732748   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.735536   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.735557   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736097   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.736137   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.736396   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.736991   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.737017   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.744129   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43195
	I0920 20:48:42.744671   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.747553   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42389
	I0920 20:48:42.748011   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.749026   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.749052   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.749449   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.750408   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39107
	I0920 20:48:42.750584   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.750999   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.751611   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751627   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751764   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.751776   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.751844   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46523
	I0920 20:48:42.752245   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.752314   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.752542   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.752592   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.753503   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44695
	I0920 20:48:42.753974   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.754124   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32933
	I0920 20:48:42.754789   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754806   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.754922   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.754954   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.754979   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.755347   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:42.755748   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.755778   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756451   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.756504   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.756710   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.756809   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37715
	I0920 20:48:42.756911   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756956   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.756972   17426 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 20:48:42.757147   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.757160   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.757573   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.757601   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.757786   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.757848   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.757957   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.758203   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.758567   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:42.758586   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 20:48:42.758609   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.759930   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.759949   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.760329   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.760522   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.762149   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0920 20:48:42.763177   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.763632   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764071   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.764089   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.764362   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.764542   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.764721   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.764829   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.764982   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 20:48:42.765250   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.765505   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.765517   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.766293   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 20:48:42.766312   17426 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 20:48:42.766331   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.767282   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:42.767602   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.767617   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:42.767629   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:42.767635   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:42.768023   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.768620   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.768636   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.769033   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.769606   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:42.769647   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:42.770046   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770055   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.770081   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.770105   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.770202   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.770375   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.770438   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:42.770446   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	W0920 20:48:42.770526   17426 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0920 20:48:42.770669   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.772010   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.773576   17426 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 20:48:42.774641   17426 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 20:48:42.774658   17426 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 20:48:42.774678   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.775024   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45223
	I0920 20:48:42.775556   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.776138   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.776153   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.777044   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.777261   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.777562   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38529
	I0920 20:48:42.777798   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.778169   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.778272   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.778289   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.779104   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.779270   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.779278   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.779293   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.779465   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.779657   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.779942   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.780180   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.781516   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45031
	I0920 20:48:42.781890   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.781970   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.782665   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.784310   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46705
	I0920 20:48:42.784396   17426 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 20:48:42.784455   17426 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 20:48:42.784798   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785170   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39331
	I0920 20:48:42.785585   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.785870   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.785888   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786003   17426 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:42.786021   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 20:48:42.786040   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786160   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786170   17426 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:42.786174   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.786179   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 20:48:42.786192   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.786754   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786805   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.786848   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.786871   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.787069   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787456   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.787704   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.787764   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.788455   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I0920 20:48:42.788995   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.789603   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.789631   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.790117   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.790898   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790930   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.790970   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791037   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.791058   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.791396   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.791464   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791556   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.791750   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.791781   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.791892   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.792728   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.792749   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.792788   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.792821   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.793062   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.793246   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.793367   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.794000   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 20:48:42.794771   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.795365   17426 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 20:48:42.795406   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 20:48:42.796390   17426 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 20:48:42.797256   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33525
	I0920 20:48:42.797777   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 20:48:42.797865   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 20:48:42.797883   17426 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 20:48:42.797902   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.799447   17426 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 20:48:42.799589   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.800122   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.801616   17426 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 20:48:42.801632   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 20:48:42.801653   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.801713   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 20:48:42.802118   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.802173   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.802806   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0920 20:48:42.803029   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.803221   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0920 20:48:42.803404   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.803411   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.803584   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43141
	I0920 20:48:42.803663   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.803723   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:42.803906   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.803922   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804042   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.804056   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804059   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.804245   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.804521   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.804585   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.804639   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.804782   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.804793   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.804844   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.804970   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 20:48:42.805144   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.805349   17426 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:42.805330   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.805367   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 20:48:42.805386   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.805528   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.805581   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.805711   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.805727   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.806112   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.806210   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806321   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.806672   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.806695   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.806377   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.806874   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.807029   17426 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:48:42.807167   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.807229   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.807513   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.808311   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:42.808334   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:48:42.808352   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.809310   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 20:48:42.811107   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 20:48:42.811156   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.811412   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.811454   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.811479   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.811499   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.811515   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.812860   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.812904   17426 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 20:48:42.813048   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42159
	I0920 20:48:42.813072   17426 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:42.813465   17426 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:48:42.813488   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.813075   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.813656   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.814613   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:48:42.814632   17426 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:48:42.814649   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.814848   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:42.815274   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 20:48:42.815352   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:42.815377   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:42.815679   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.815732   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:42.815942   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:42.816341   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.816374   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.816631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.816813   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.816997   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.817143   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.817549   17426 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 20:48:42.818037   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:42.818075   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818471   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.818494   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.818631   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.818744   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 20:48:42.818758   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 20:48:42.818767   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.818776   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.818906   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.819052   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.819435   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.819491   17426 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 20:48:42.819891   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.819917   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.820019   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.820163   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.820412   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.820530   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.821660   17426 out.go:177]   - Using image docker.io/busybox:stable
	I0920 20:48:42.821707   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822065   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.822084   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.822234   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.822379   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.822499   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.822604   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:42.822845   17426 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:42.822864   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 20:48:42.822891   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:42.825409   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826320   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:42.826340   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:42.826490   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:42.826650   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:42.826791   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:42.826997   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:43.153916   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 20:48:43.168073   17426 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:48:43.168113   17426 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 20:48:43.291676   17426 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 20:48:43.291703   17426 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 20:48:43.315362   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 20:48:43.315391   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 20:48:43.316712   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 20:48:43.334248   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:48:43.348744   17426 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 20:48:43.348770   17426 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 20:48:43.355224   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 20:48:43.380424   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 20:48:43.380456   17426 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 20:48:43.394590   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 20:48:43.394621   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 20:48:43.398170   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 20:48:43.398196   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 20:48:43.428507   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 20:48:43.436406   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 20:48:43.461703   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:48:43.498907   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:48:43.498930   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 20:48:43.574065   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 20:48:43.574091   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 20:48:43.575337   17426 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.575356   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 20:48:43.577797   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 20:48:43.577814   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 20:48:43.633050   17426 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 20:48:43.633074   17426 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 20:48:43.634773   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 20:48:43.634794   17426 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 20:48:43.724101   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:48:43.724129   17426 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:48:43.756985   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 20:48:43.757016   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 20:48:43.773054   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 20:48:43.788001   17426 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 20:48:43.788031   17426 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 20:48:43.885553   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 20:48:43.885575   17426 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 20:48:43.888832   17426 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 20:48:43.888852   17426 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 20:48:43.950930   17426 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:43.950977   17426 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:48:43.992296   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 20:48:43.992318   17426 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 20:48:44.002913   17426 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 20:48:44.002938   17426 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 20:48:44.086858   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 20:48:44.086885   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 20:48:44.087873   17426 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.087888   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 20:48:44.166057   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 20:48:44.166083   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 20:48:44.250028   17426 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.250057   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 20:48:44.341452   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:48:44.487857   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 20:48:44.491856   17426 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 20:48:44.491882   17426 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 20:48:44.577907   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 20:48:44.577939   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 20:48:44.667346   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:44.955208   17426 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 20:48:44.955243   17426 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 20:48:44.999353   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 20:48:44.999376   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 20:48:45.209150   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 20:48:45.209172   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 20:48:45.262741   17426 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.262773   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 20:48:45.491632   17426 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:45.491665   17426 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 20:48:45.602976   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 20:48:45.772919   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 20:48:46.676907   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.522947186s)
	I0920 20:48:46.676970   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.676965   17426 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.50885438s)
	I0920 20:48:46.676983   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.676991   17426 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.508845802s)
	I0920 20:48:46.677011   17426 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0920 20:48:46.677250   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677312   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677351   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:46.677279   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:46.677361   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:46.677562   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:46.677575   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:46.677980   17426 node_ready.go:35] waiting up to 6m0s for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718417   17426 node_ready.go:49] node "addons-628754" has status "Ready":"True"
	I0920 20:48:46.718446   17426 node_ready.go:38] duration metric: took 40.437292ms for node "addons-628754" to be "Ready" ...
	I0920 20:48:46.718457   17426 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:48:46.792590   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:47.364664   17426 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-628754" context rescaled to 1 replicas
	I0920 20:48:48.649858   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.333111598s)
	I0920 20:48:48.649919   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649931   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.649939   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.315657166s)
	I0920 20:48:48.649974   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.649986   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650035   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.294786194s)
	I0920 20:48:48.650101   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650114   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650278   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650325   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650395   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650409   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650420   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650365   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650483   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650494   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650502   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650281   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650425   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650655   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650663   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.650670   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.650458   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650706   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.650736   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.650743   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.650777   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652127   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652143   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.652308   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:48.652326   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.652333   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:48.858610   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:48.896047   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:48.896071   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:48.896417   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:48.896440   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:49.836065   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 20:48:49.836110   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:49.838684   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839092   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:49.839121   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:49.839312   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:49.839498   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:49.839673   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:49.839822   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.396937   17426 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 20:48:50.507257   17426 addons.go:234] Setting addon gcp-auth=true in "addons-628754"
	I0920 20:48:50.507314   17426 host.go:66] Checking if "addons-628754" exists ...
	I0920 20:48:50.507599   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.507638   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.522912   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36851
	I0920 20:48:50.523406   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.523934   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.523954   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.524273   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.524773   17426 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 20:48:50.524796   17426 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 20:48:50.540243   17426 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36637
	I0920 20:48:50.540743   17426 main.go:141] libmachine: () Calling .GetVersion
	I0920 20:48:50.541204   17426 main.go:141] libmachine: Using API Version  1
	I0920 20:48:50.541230   17426 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 20:48:50.541545   17426 main.go:141] libmachine: () Calling .GetMachineName
	I0920 20:48:50.541736   17426 main.go:141] libmachine: (addons-628754) Calling .GetState
	I0920 20:48:50.543277   17426 main.go:141] libmachine: (addons-628754) Calling .DriverName
	I0920 20:48:50.543496   17426 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 20:48:50.543517   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHHostname
	I0920 20:48:50.546450   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.546948   17426 main.go:141] libmachine: (addons-628754) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:f8:c3", ip: ""} in network mk-addons-628754: {Iface:virbr1 ExpiryTime:2024-09-20 21:48:12 +0000 UTC Type:0 Mac:52:54:00:59:f8:c3 Iaid: IPaddr:192.168.39.204 Prefix:24 Hostname:addons-628754 Clientid:01:52:54:00:59:f8:c3}
	I0920 20:48:50.546998   17426 main.go:141] libmachine: (addons-628754) DBG | domain addons-628754 has defined IP address 192.168.39.204 and MAC address 52:54:00:59:f8:c3 in network mk-addons-628754
	I0920 20:48:50.547180   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHPort
	I0920 20:48:50.547367   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHKeyPath
	I0920 20:48:50.547520   17426 main.go:141] libmachine: (addons-628754) Calling .GetSSHUsername
	I0920 20:48:50.547677   17426 sshutil.go:53] new ssh client: &{IP:192.168.39.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/addons-628754/id_rsa Username:docker}
	I0920 20:48:50.940912   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.512363058s)
	I0920 20:48:50.940953   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.504514111s)
	I0920 20:48:50.940967   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940980   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.940984   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.940994   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941024   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.479289353s)
	I0920 20:48:50.941058   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941073   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941075   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.167991227s)
	I0920 20:48:50.941094   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941110   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941144   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.599655439s)
	I0920 20:48:50.941166   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941177   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941213   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.453326238s)
	I0920 20:48:50.941232   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941249   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941456   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941460   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941475   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941490   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941495   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941499   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941502   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941512   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941515   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941523   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941532   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941541   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941543   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941549   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941555   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941518   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941567   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941568   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941576   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941584   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941590   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941569   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941603   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941610   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941613   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:50.941621   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:50.941887   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941903   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.941926   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.941935   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.941944   17426 addons.go:475] Verifying addon registry=true in "addons-628754"
	I0920 20:48:50.942201   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.942209   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944058   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944089   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944096   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944105   17426 addons.go:475] Verifying addon metrics-server=true in "addons-628754"
	I0920 20:48:50.944376   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944388   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944397   17426 addons.go:475] Verifying addon ingress=true in "addons-628754"
	I0920 20:48:50.944525   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944535   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.944664   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:50.944693   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:50.944699   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:50.945140   17426 out.go:177] * Verifying registry addon...
	I0920 20:48:50.945952   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:50.946263   17426 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-628754 service yakd-dashboard -n yakd-dashboard
	
	I0920 20:48:50.947404   17426 out.go:177] * Verifying ingress addon...
	I0920 20:48:50.948365   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 20:48:50.949906   17426 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 20:48:51.024842   17426 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 20:48:51.024862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.067642   17426 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 20:48:51.067672   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.189554   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.189583   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.189883   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.189903   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.189907   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.485820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.486047   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:51.508824   17426 pod_ready.go:93] pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:51.508855   17426 pod_ready.go:82] duration metric: took 4.716232797s for pod "coredns-7c65d6cfc9-2s7hq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.508867   17426 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:51.565560   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.898160297s)
	W0920 20:48:51.565610   17426 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565618   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.96259978s)
	I0920 20:48:51.565638   17426 retry.go:31] will retry after 289.771206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 20:48:51.565665   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.565681   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.565928   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.565982   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.565995   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566029   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:51.566041   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:51.566248   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:51.566263   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:51.566271   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:51.856017   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 20:48:51.967451   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:51.967668   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.482714   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.483315   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:52.577722   17426 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.034201768s)
	I0920 20:48:52.577966   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.80498182s)
	I0920 20:48:52.578008   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578045   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578273   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578285   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578293   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:52.578299   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:52.578504   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:52.578523   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:52.578537   17426 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-628754"
	I0920 20:48:52.579487   17426 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 20:48:52.580427   17426 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 20:48:52.581909   17426 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 20:48:52.582957   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 20:48:52.583079   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 20:48:52.583105   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 20:48:52.594165   17426 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 20:48:52.594198   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:52.744237   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 20:48:52.744290   17426 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 20:48:52.782030   17426 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.782055   17426 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 20:48:52.812476   17426 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 20:48:52.953254   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:52.955366   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.088530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.452082   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.454025   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:53.518057   17426 pod_ready.go:103] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:53.592525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:53.952666   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:53.954796   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.088578   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.137461   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.281390873s)
	I0920 20:48:54.137517   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137536   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137804   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.137827   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.137837   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.137845   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.137851   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.138105   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.138122   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.467785   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:54.488772   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.542327   17426 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.729811085s)
	I0920 20:48:54.542374   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542390   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542681   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542702   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.542701   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.542722   17426 main.go:141] libmachine: Making call to close driver server
	I0920 20:48:54.542729   17426 main.go:141] libmachine: (addons-628754) Calling .Close
	I0920 20:48:54.542936   17426 main.go:141] libmachine: Successfully made call to close driver server
	I0920 20:48:54.542997   17426 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 20:48:54.543017   17426 main.go:141] libmachine: (addons-628754) DBG | Closing plugin on server side
	I0920 20:48:54.544656   17426 addons.go:475] Verifying addon gcp-auth=true in "addons-628754"
	I0920 20:48:54.546479   17426 out.go:177] * Verifying gcp-auth addon...
	I0920 20:48:54.548531   17426 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 20:48:54.573659   17426 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 20:48:54.573678   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:54.617981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:54.952495   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:54.960055   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.061661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.091945   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.458616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.459309   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:55.516457   17426 pod_ready.go:98] pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516485   17426 pod_ready.go:82] duration metric: took 4.007610514s for pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace to be "Ready" ...
	E0920 20:48:55.516499   17426 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-dv9qb" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:55 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 20:48:42 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.204 HostIPs:[{IP:192.168.39.204}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 20:48:42 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 20:48:46 +0000 UTC,FinishedAt:2024-09-20 20:48:53 +0000 UTC,ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://463342b7c77627b06d1c4c605bf2b5c802584bb5e7490211a8eca01ab7d99a30 Started:0xc0026dd950 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e01120} {Name:kube-api-access-vqz7n MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e01130}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 20:48:55.516516   17426 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520826   17426 pod_ready.go:93] pod "etcd-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.520844   17426 pod_ready.go:82] duration metric: took 4.317233ms for pod "etcd-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.520857   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526814   17426 pod_ready.go:93] pod "kube-apiserver-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.526835   17426 pod_ready.go:82] duration metric: took 5.969545ms for pod "kube-apiserver-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.526846   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534816   17426 pod_ready.go:93] pod "kube-controller-manager-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.534841   17426 pod_ready.go:82] duration metric: took 7.987084ms for pod "kube-controller-manager-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.534855   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541017   17426 pod_ready.go:93] pod "kube-proxy-ck9m7" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.541043   17426 pod_ready.go:82] duration metric: took 6.180196ms for pod "kube-proxy-ck9m7" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.541055   17426 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.551619   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:55.593184   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:55.913365   17426 pod_ready.go:93] pod "kube-scheduler-addons-628754" in "kube-system" namespace has status "Ready":"True"
	I0920 20:48:55.913386   17426 pod_ready.go:82] duration metric: took 372.323338ms for pod "kube-scheduler-addons-628754" in "kube-system" namespace to be "Ready" ...
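	(pod_ready.go treats a pod as "Ready" when its PodReady condition is True; the Succeeded CoreDNS pod above is skipped because a completed pod can never become Ready again. A small client-go sketch of that condition check follows; the package and function names are illustrative.)

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the named pod currently has condition Ready=True,
// which is what the pod_ready checks above wait for on each control-plane pod.
func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// Terminal pods are skipped, as with the Succeeded CoreDNS pod above.
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		return false, nil
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}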
	I0920 20:48:55.913396   17426 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:55.955137   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.051699   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.452870   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.454722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:56.551855   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:56.587689   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:56.953293   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:56.955457   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.051952   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.088662   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.452257   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.454267   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:57.552093   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:57.588708   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:57.922382   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:48:57.954489   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:57.955101   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.052836   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.088049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.453756   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.454264   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:58.552172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:58.588631   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:58.956248   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:58.956561   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.054141   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.091590   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.452442   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.457148   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:48:59.552573   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:48:59.587624   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:48:59.958079   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:48:59.959212   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.058727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.170305   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.426857   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:00.452423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.454540   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:00.552677   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:00.588021   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:00.952150   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:00.954190   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.052262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.087142   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.452176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.455052   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:01.552454   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:01.587458   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:01.954352   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:01.954591   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.052894   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.088280   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.453416   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.453710   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:02.552308   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:02.587514   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:02.919214   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:02.952365   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:02.954381   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.052491   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.087251   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.453233   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.455205   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:03.552645   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:03.588778   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:03.954362   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:03.954626   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.052848   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.088290   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.452525   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.455051   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:04.553267   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:04.587782   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:04.919682   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:04.951969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:04.956425   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.052517   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.087781   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.452284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.454925   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:05.553478   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:05.588581   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:05.953511   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:05.954297   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.051947   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.088183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.453270   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:06.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.553191   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:06.588354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:06.920227   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:06.954430   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:06.954572   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.052484   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.087775   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.452418   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.454832   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:07.552476   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:07.587613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:07.952831   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:07.954609   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.052698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.088426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.452555   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.453857   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:08.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:08.587877   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:08.921045   17426 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"False"
	I0920 20:49:08.953289   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:08.955044   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.058613   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.088441   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.454326   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.454844   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.552759   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:09.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:09.920620   17426 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace has status "Ready":"True"
	I0920 20:49:09.920641   17426 pod_ready.go:82] duration metric: took 14.007238973s for pod "nvidia-device-plugin-daemonset-whxlq" in "kube-system" namespace to be "Ready" ...
	I0920 20:49:09.920649   17426 pod_ready.go:39] duration metric: took 23.202177695s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:49:09.920663   17426 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:49:09.920712   17426 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:49:09.947981   17426 api_server.go:72] duration metric: took 27.309549406s to wait for apiserver process to appear ...
	I0920 20:49:09.948011   17426 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:49:09.948029   17426 api_server.go:253] Checking apiserver healthz at https://192.168.39.204:8443/healthz ...
	I0920 20:49:09.956504   17426 api_server.go:279] https://192.168.39.204:8443/healthz returned 200:
	ok
	I0920 20:49:09.958166   17426 api_server.go:141] control plane version: v1.31.1
	I0920 20:49:09.958191   17426 api_server.go:131] duration metric: took 10.173299ms to wait for apiserver health ...
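	(The healthz probe above expects the literal string "ok" from the API server's /healthz endpoint. A sketch of the same check through a client-go REST client is shown below; the package and function names are illustrative.)

package health

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy performs a GET on /healthz through the clientset's REST
// transport; the server answers with the body "ok" when healthy, as logged
// above for https://192.168.39.204:8443/healthz.
func apiserverHealthy(ctx context.Context, cs kubernetes.Interface) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected healthz response: %q", body)
	}
	return nil
}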
	I0920 20:49:09.958199   17426 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 20:49:09.958813   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:09.961812   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:09.968812   17426 system_pods.go:59] 17 kube-system pods found
	I0920 20:49:09.968841   17426 system_pods.go:61] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.968849   17426 system_pods.go:61] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.968855   17426 system_pods.go:61] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.968863   17426 system_pods.go:61] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.968867   17426 system_pods.go:61] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.968871   17426 system_pods.go:61] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.968874   17426 system_pods.go:61] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.968877   17426 system_pods.go:61] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.968880   17426 system_pods.go:61] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.968883   17426 system_pods.go:61] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.968887   17426 system_pods.go:61] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.968891   17426 system_pods.go:61] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.968896   17426 system_pods.go:61] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.968900   17426 system_pods.go:61] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.968907   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968912   17426 system_pods.go:61] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.968915   17426 system_pods.go:61] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.968921   17426 system_pods.go:74] duration metric: took 10.716452ms to wait for pod list to return data ...
	I0920 20:49:09.968929   17426 default_sa.go:34] waiting for default service account to be created ...
	I0920 20:49:09.971550   17426 default_sa.go:45] found service account: "default"
	I0920 20:49:09.971566   17426 default_sa.go:55] duration metric: took 2.63218ms for default service account to be created ...
	I0920 20:49:09.971573   17426 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 20:49:09.979493   17426 system_pods.go:86] 17 kube-system pods found
	I0920 20:49:09.979518   17426 system_pods.go:89] "coredns-7c65d6cfc9-2s7hq" [5aa81130-71b3-409f-b876-5cf2992f0699] Running
	I0920 20:49:09.979525   17426 system_pods.go:89] "csi-hostpath-attacher-0" [73d3535a-c32f-4b60-98d2-fa1c3236253d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 20:49:09.979532   17426 system_pods.go:89] "csi-hostpath-resizer-0" [01d9a848-dc32-4dda-8389-fd1d30d8e2b9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 20:49:09.979538   17426 system_pods.go:89] "csi-hostpathplugin-7bhtl" [db9699fc-b3cc-4b93-8d9a-4df2159cdebc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 20:49:09.979546   17426 system_pods.go:89] "etcd-addons-628754" [fed171b7-a220-4459-bf38-9c1f660badcb] Running
	I0920 20:49:09.979551   17426 system_pods.go:89] "kube-apiserver-addons-628754" [7de28a68-441a-45ba-96a9-fe4a1100728a] Running
	I0920 20:49:09.979555   17426 system_pods.go:89] "kube-controller-manager-addons-628754" [9a7780c5-87a4-49b9-b198-ffa0f09f8040] Running
	I0920 20:49:09.979559   17426 system_pods.go:89] "kube-ingress-dns-minikube" [e534a7bd-e2bb-49ab-a14f-28256e31640f] Running
	I0920 20:49:09.979562   17426 system_pods.go:89] "kube-proxy-ck9m7" [6b23ed7f-65d0-419b-b039-6526301af08b] Running
	I0920 20:49:09.979567   17426 system_pods.go:89] "kube-scheduler-addons-628754" [81fb37ae-fc86-40a5-a2b3-874ab340a481] Running
	I0920 20:49:09.979572   17426 system_pods.go:89] "metrics-server-84c5f94fbc-jcnct" [d19e3126-c1ed-4af9-b7ab-80e6e4a06949] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 20:49:09.979575   17426 system_pods.go:89] "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
	I0920 20:49:09.979581   17426 system_pods.go:89] "registry-66c9cd494c-6dc7n" [19bd1a7e-1f50-4be0-b5ef-365eff973102] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 20:49:09.979585   17426 system_pods.go:89] "registry-proxy-75xfb" [d9752b94-3a87-4a8d-a156-b0f4baae0181] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 20:49:09.979595   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nnfk8" [94e58c25-9906-4f3f-a71d-3fce5e41f93f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979600   17426 system_pods.go:89] "snapshot-controller-56fcc65765-nsvqk" [e2d4d57b-92af-4450-b3ea-1de9e09634c0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 20:49:09.979604   17426 system_pods.go:89] "storage-provisioner" [91deb30c-ef21-4a79-94e2-db81f24ab089] Running
	I0920 20:49:09.979610   17426 system_pods.go:126] duration metric: took 8.032198ms to wait for k8s-apps to be running ...
	I0920 20:49:09.979617   17426 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 20:49:09.979655   17426 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 20:49:09.995230   17426 system_svc.go:56] duration metric: took 15.606321ms WaitForService to wait for kubelet
	I0920 20:49:09.995254   17426 kubeadm.go:582] duration metric: took 27.356825963s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
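	(The WaitForService step above boils down to asking systemd whether the kubelet unit is active; systemctl exits 0 in that case and non-zero otherwise. A local os/exec sketch of the same probe follows; minikube runs the command over SSH via ssh_runner, and the helper name here is illustrative.)

package svc

import "os/exec"

// kubeletActive mirrors the logged `sudo systemctl is-active --quiet service kubelet`
// probe: a zero exit status means the unit is active.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}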
	I0920 20:49:09.995272   17426 node_conditions.go:102] verifying NodePressure condition ...
	I0920 20:49:09.999350   17426 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 20:49:09.999371   17426 node_conditions.go:123] node cpu capacity is 2
	I0920 20:49:09.999383   17426 node_conditions.go:105] duration metric: took 4.107149ms to run NodePressure ...
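	(The NodePressure check above reads the node's reported capacity, 17734596Ki of ephemeral storage and 2 CPUs here, along with its pressure conditions. A client-go sketch of pulling the same fields is shown below; the package and function names are illustrative.)

package nodeinfo

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node, prints the ephemeral-storage and CPU
// capacity used by the NodePressure verification above, and flags any
// MemoryPressure/DiskPressure condition that is True.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, cond := range n.Status.Conditions {
			isPressure := cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure
			if isPressure && cond.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", cond.Type)
			}
		}
	}
	return nil
}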
	I0920 20:49:09.999393   17426 start.go:241] waiting for startup goroutines ...
	I0920 20:49:10.053460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.087433   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.454100   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:10.553327   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:10.587292   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:10.953419   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:10.953751   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.052530   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.087591   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.452819   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.454012   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:11.553210   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:11.588262   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:11.954176   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:11.956729   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.052811   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.088582   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.452880   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.456247   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:12.551887   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:12.588295   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:12.954541   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:12.954986   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.053227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.088399   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.453995   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.454604   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:13.553146   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:13.588109   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:13.952535   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:13.954871   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.053058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.088969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.453183   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.456277   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:14.552749   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:14.587809   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:14.952580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:14.954915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.052901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.088153   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.452376   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.454450   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:15.552178   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:15.590244   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:15.952288   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:15.954887   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.059820   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.089668   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.453534   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.455331   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:16.553285   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:16.587701   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:16.952987   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:16.955195   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.052783   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.088222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.453994   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.455805   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:17.552334   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:17.588741   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:17.951963   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:17.954638   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.052247   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.088172   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:18.453070   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:18.454583   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:18.552893   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:18.587730   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.176219   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.176692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.177743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.178016   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.452551   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.455179   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:19.552929   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:19.588351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:19.952550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:19.954625   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.052996   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.088357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.452557   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.455753   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:20.551985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:20.587862   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:20.954138   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:20.955659   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.053752   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.088273   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:21.452533   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:21.454412   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:21.551961   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:21.587902   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.213019   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.214276   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.214460   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 20:49:22.214725   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455160   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:22.455472   17426 kapi.go:107] duration metric: took 31.507106479s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 20:49:22.554860   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:22.587605   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:22.959147   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.056319   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.088920   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.454497   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:23.552505   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:23.587969   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:23.954323   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.051816   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.088130   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.456274   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:24.555427   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:24.588693   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:24.955063   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.052728   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.087985   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.456210   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:25.552574   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:25.587488   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:25.955116   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.054687   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.087638   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.626115   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:26.626483   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:26.628344   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:26.954240   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.052307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.087762   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.454250   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:27.552081   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:27.589592   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:27.953915   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.052492   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.088707   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.455291   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:28.551661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:28.599028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:28.956684   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.052143   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.089901   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.454734   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:29.555646   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:29.588017   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:29.954631   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.052315   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.089604   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.453995   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:30.552431   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:30.587569   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:30.955286   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.052545   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.087284   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.455202   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:31.551700   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:31.587298   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:31.954692   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.052682   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.093227   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.457841   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:32.557548   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:32.658776   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:32.954760   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.052426   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.087335   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.455135   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:33.552450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:33.599114   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:33.954722   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.051989   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.087593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.453655   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:34.552523   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:34.596357   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:34.955321   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.052589   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.087503   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.455616   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:35.552302   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:35.592075   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:35.956755   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.056028   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.157593   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.455543   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:36.551889   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:36.595187   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:36.955775   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.052828   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.088401   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.454574   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:37.552734   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:37.588058   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:37.955848   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.053475   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.087807   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.453781   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:38.551921   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:38.588049   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:38.956439   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.287727   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.288266   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.455413   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:39.556117   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:39.587698   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:39.954228   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.052587   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.088235   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.454685   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:40.552222   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:40.587307   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:40.965032   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.069064   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.169661   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.456252   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:41.551748   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:41.587763   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:41.953862   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.054479   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.090521   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:42.454259   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:42.553452   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:42.588439   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.003849   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.052220   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.088192   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.454954   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:43.552351   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:43.588323   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:43.955159   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.053036   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.090032   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.454513   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:44.552038   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:44.588265   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:44.955285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.054003   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.156329   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.454696   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:45.553050   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:45.588047   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:45.955073   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.061080   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.159743   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.454017   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:46.552354   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:46.587300   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:46.954698   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.054845   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.088317   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.455285   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:47.552981   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:47.598550   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:47.955384   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.054653   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.088277   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.454990   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:48.552663   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:48.587506   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:48.954620   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.051691   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.154600   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.454723   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:49.553679   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:49.587997   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 20:49:49.954402   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.058736   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.090947   17426 kapi.go:107] duration metric: took 57.507986835s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 20:49:50.455270   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:50.552676   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:50.954128   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.051625   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.455727   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:51.552466   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:51.955330   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.052011   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.454834   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:52.552770   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:52.954599   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.055449   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.455037   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:53.552423   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:53.955090   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.053025   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.454346   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:54.552755   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:54.956136   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.053157   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.455144   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:55.552459   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:55.954952   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.054107   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598482   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:56.598807   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:56.955194   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.054341   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.456555   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:57.552580   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:57.954657   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.052499   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.454614   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:58.552294   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:58.954070   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.054450   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.594930   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:49:59.595375   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:49:59.953489   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.052196   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.454898   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:00.552561   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:00.954914   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.055034   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.454793   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:01.552037   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:01.955703   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.054975   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.454984   17426 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 20:50:02.552692   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:02.961750   17426 kapi.go:107] duration metric: took 1m12.01184163s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 20:50:03.052368   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:03.552411   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.052493   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:04.552884   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.053084   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:05.555738   17426 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 20:50:06.052349   17426 kapi.go:107] duration metric: took 1m11.503815049s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 20:50:06.054082   17426 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-628754 cluster.
	I0920 20:50:06.055505   17426 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 20:50:06.056936   17426 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 20:50:06.058346   17426 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, metrics-server, nvidia-device-plugin, yakd, default-storageclass, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 20:50:06.059716   17426 addons.go:510] duration metric: took 1m23.421239851s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher metrics-server nvidia-device-plugin yakd default-storageclass inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 20:50:06.059768   17426 start.go:246] waiting for cluster config update ...
	I0920 20:50:06.059789   17426 start.go:255] writing updated cluster config ...
	I0920 20:50:06.060115   17426 ssh_runner.go:195] Run: rm -f paused
	I0920 20:50:06.111505   17426 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 20:50:06.113737   17426 out.go:177] * Done! kubectl is now configured to use "addons-628754" cluster and "default" namespace by default
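	The gcp-auth note above points at the `gcp-auth-skip-secret` label as the way to opt a single pod out of credential mounting. As a minimal sketch of what such a pod object could look like (client-go types; only the label key comes from the log line above — the pod name, namespace, image, and the label value "true" are illustrative assumptions, not taken from this report):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Pod labeled so the gcp-auth webhook skips mounting GCP credentials into it.
		// Label key taken from the minikube output above; everything else here is
		// illustrative and would be adapted to the workload being deployed.
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:      "no-gcp-creds",
				Namespace: "default",
				Labels:    map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "gcr.io/k8s-minikube/busybox",
				}},
			},
		}
		fmt.Println(pod.Labels)
	}

	The same label could equally be set in a plain pod manifest before applying it with kubectl; the Go form is shown only because the surrounding tooling in this report is written in Go.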
	
	
	==> CRI-O <==
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.759214734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866264759189279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1e1768a-6927-4355-9d9a-8ca9b689b2b3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.760163267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f943d67-51e2-4dc9-aeda-4df480eff713 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.760313610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f943d67-51e2-4dc9-aeda-4df480eff713 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.760880213Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f943d67-51e2-4dc9-aeda-4df480eff713 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.802410094Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bdddf633-feb2-4fba-9b44-bace7d2dd549 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.802509821Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bdddf633-feb2-4fba-9b44-bace7d2dd549 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.803563626Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=23e6e81a-f405-4bd7-ae50-19933caefd19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.804857717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866264804768953,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=23e6e81a-f405-4bd7-ae50-19933caefd19 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.805668622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d80b484b-b6a9-4058-8ec3-f2e6f4b306e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.805726884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d80b484b-b6a9-4058-8ec3-f2e6f4b306e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.806168967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d80b484b-b6a9-4058-8ec3-f2e6f4b306e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.846118864Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=87cc471f-220c-4c15-ab1f-b6988844c4a6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.846214229Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=87cc471f-220c-4c15-ab1f-b6988844c4a6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.848090778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=732bc839-2100-42c9-b19d-1471662d07a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.850904907Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866264850734968,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=732bc839-2100-42c9-b19d-1471662d07a1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.851713269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c2ab5d8-614c-40d4-81fa-681c5a10d3fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.851838829Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c2ab5d8-614c-40d4-81fa-681c5a10d3fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.853143605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c2ab5d8-614c-40d4-81fa-681c5a10d3fb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.897654526Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c8805be-e158-415a-85eb-82728743d5c6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.897738041Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c8805be-e158-415a-85eb-82728743d5c6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.899137649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6cde735c-af3a-4341-9a64-ccc8d445010d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.900447697Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866264900418978,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6cde735c-af3a-4341-9a64-ccc8d445010d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.901332231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7d46e23d-bc32-4a70-995e-3470f45abafb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.901417661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7d46e23d-bc32-4a70-995e-3470f45abafb name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:04:24 addons-628754 crio[661]: time="2024-09-20 21:04:24.901655536Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7af85242af0bce326a576c934564be8619dcae9a83392925678ddd20b7251794,PodSandboxId:f450cbc9b57312d32ca8d50792e5c398d2aadaafab354d9c718b4d1d1b2b58be,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1726866095147005588,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-8kfw7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6c61956c-824f-487f-a9e3-d2343b2e2e99,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb0e1c83764c66fa21b6e9b73bfc023553eff9aad817efa3006d65594bd33f01,PodSandboxId:8aca549d11d28b3b06bbedc4d78049c9a5571762eb6c5aa3a77e8cb995d60e62,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9,State:CONTAINER_RUNNING,CreatedAt:1726865957019764543,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: db159a66-031e-40fe-aee0-8b2be90d02fc,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce,PodSandboxId:4e0a25a9933fefc6854572a5be419b41c6bc0aa70ac27fa1dd4f20cee506c738,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1726865405428414027,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-89d5ffd79-cbzrf,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.
pod.uid: a711db4d-67b7-4587-a77a-7b46017cc5ac,},Annotations:map[string]string{io.kubernetes.container.hash: 91308b2f,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bc78aa165fe91eaab2a361445ee07d7606872b7b1ff4a213fcad68d6e1d7c19,PodSandboxId:1e9f678a16f9ca9e6fc4f0510597ac4c5de929834cfc75219d3509a402b94a93,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1726865362797068214,Labels:map[string]string{io.kubernetes.container.name: metr
ics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-jcnct,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d19e3126-c1ed-4af9-b7ab-80e6e4a06949,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6,PodSandboxId:18ef2311ef6406b64746c392b7fe5cf08e963b93125431c9df624864a35cf02c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNN
ING,CreatedAt:1726865330459366347,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91deb30c-ef21-4a79-94e2-db81f24ab089,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167,PodSandboxId:71537944126f02bd91aa07c5bcd66e9bb3f52a3052d80a1e159b2c41d909d6a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726865325
805204455,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-2s7hq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5aa81130-71b3-409f-b876-5cf2992f0699,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06,PodSandboxId:eda25750507000c48f7feec33451823651eb62446ef78d035c6a9e07f510517d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf0
6a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726865323529000977,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ck9m7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b23ed7f-65d0-419b-b039-6526301af08b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7,PodSandboxId:0c7ce0a0dbb52618cd5a51cefd71801bace99452ebe0edad062e6820d309dcf5,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string
{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726865312305509284,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b73ae49e512e88edd7b3d51422ae7430,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479,PodSandboxId:a48bfd7c50b60e26fb0b82bce4e3fe965d70e2653e7c19acc60f125fa263c92c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler
:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726865312294235910,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 95dd9bd765c15daee9bf57498638d246,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32,PodSandboxId:8a9d6431fae9243a48a73fa2f91d7db500980b57182afbb33257ba5a2fa67c78,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Imag
eRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726865312212359367,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa63fad15c435d58f234d76f70de363b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee,PodSandboxId:7760f2e14947d769d4cf27b7ba70f433524f315b3c119d433b6ce30c0c280b4e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},Ima
geRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726865312172152136,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-628754,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e42c9bd863c924c355ecd9ed32bf8b6b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7d46e23d-bc32-4a70-995e-3470f45abafb name=/runtime.v1.RuntimeService/ListContainers
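The crio debug entries above are CRI gRPC calls (RuntimeService/Version, ImageService/ImageFsInfo, RuntimeService/ListContainers) issued against the CRI-O socket while this report was being collected. A rough sketch for reproducing the same queries by hand, assuming the addons-628754 profile is still running; the exact flags are not part of the captured output and the socket path is the one advertised in the node's cri-socket annotation further down:

    minikube -p addons-628754 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    minikube -p addons-628754 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a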
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7af85242af0bc       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   f450cbc9b5731       hello-world-app-55bf9c44b4-8kfw7
	bb0e1c83764c6       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   8aca549d11d28       nginx
	0f6e90ddc80f9       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   4e0a25a9933fe       gcp-auth-89d5ffd79-cbzrf
	7bc78aa165fe9       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   1e9f678a16f9c       metrics-server-84c5f94fbc-jcnct
	d26a1572da03a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   18ef2311ef640       storage-provisioner
	eb35b11d6adb6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   71537944126f0       coredns-7c65d6cfc9-2s7hq
	628b1145299a3       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago      Running             kube-proxy                0                   eda2575050700       kube-proxy-ck9m7
	8c22fecec8ece       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago      Running             etcd                      0                   0c7ce0a0dbb52       etcd-addons-628754
	9f47006040cb1       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago      Running             kube-scheduler            0                   a48bfd7c50b60       kube-scheduler-addons-628754
	6308109d7cef9       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago      Running             kube-controller-manager   0                   8a9d6431fae92       kube-controller-manager-addons-628754
	7d18650c60246       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago      Running             kube-apiserver            0                   7760f2e14947d       kube-apiserver-addons-628754
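The table above is the runtime-level view of the node's containers. For comparison, the pod-level view from the API server can be pulled with the same context the tests use (a sketch, assuming the cluster has not been torn down yet):

    kubectl --context addons-628754 get pods -A -o wide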
	
	
	==> coredns [eb35b11d6adb6b86b758475e8a1207a6315fa916ff1fc0f2b9bd29e7b35f1167] <==
	[INFO] 127.0.0.1:39672 - 18092 "HINFO IN 1734730656473124985.126132105188895419. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.041138577s
	[INFO] 10.244.0.7:39855 - 37406 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000458004s
	[INFO] 10.244.0.7:39855 - 49690 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000827871s
	[INFO] 10.244.0.7:45470 - 13293 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000192818s
	[INFO] 10.244.0.7:45470 - 8936 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000320178s
	[INFO] 10.244.0.7:43873 - 16971 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110387s
	[INFO] 10.244.0.7:43873 - 16457 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000096404s
	[INFO] 10.244.0.7:44014 - 47350 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149624s
	[INFO] 10.244.0.7:44014 - 3832 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000332831s
	[INFO] 10.244.0.7:47949 - 2333 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000070958s
	[INFO] 10.244.0.7:47949 - 57368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000030483s
	[INFO] 10.244.0.7:33217 - 42646 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000045557s
	[INFO] 10.244.0.7:33217 - 51860 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00021064s
	[INFO] 10.244.0.7:33663 - 16494 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081574s
	[INFO] 10.244.0.7:33663 - 7952 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000109875s
	[INFO] 10.244.0.7:41599 - 53504 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00009227s
	[INFO] 10.244.0.7:41599 - 40450 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000256907s
	[INFO] 10.244.0.21:38291 - 34561 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000696839s
	[INFO] 10.244.0.21:37047 - 19984 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286736s
	[INFO] 10.244.0.21:48670 - 59512 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000103607s
	[INFO] 10.244.0.21:42040 - 13282 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000079655s
	[INFO] 10.244.0.21:49121 - 13348 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000082438s
	[INFO] 10.244.0.21:43215 - 5085 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000080693s
	[INFO] 10.244.0.21:57332 - 41951 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000645949s
	[INFO] 10.244.0.21:51212 - 53235 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001099281s
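The NXDOMAIN/NOERROR pairs above show the standard cluster DNS search-path expansion: with the default pod resolv.conf (ndots:5), a lookup such as registry.kube-system.svc.cluster.local is first tried with each search suffix appended, producing the NXDOMAIN answers, before the fully qualified name resolves with NOERROR. A minimal way to trigger the same query pattern, reusing the busybox image the test already pulls (the pod name dns-probe is arbitrary):

    kubectl --context addons-628754 run dns-probe --rm -it --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- nslookup registry.kube-system.svc.cluster.local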
	
	
	==> describe nodes <==
	Name:               addons-628754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-628754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-628754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_48_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-628754
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:48:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-628754
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:04:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:01:43 +0000   Fri, 20 Sep 2024 20:48:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.204
	  Hostname:    addons-628754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912788Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7c9184a6c7a4f059f8bfecc27a38bef
	  System UUID:                c7c9184a-6c7a-4f05-9f8b-fecc27a38bef
	  Boot ID:                    cf51e733-8187-478e-b6b3-2f37c0eadb57
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  default                     hello-world-app-55bf9c44b4-8kfw7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  gcp-auth                    gcp-auth-89d5ffd79-cbzrf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7c65d6cfc9-2s7hq                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     15m
	  kube-system                 etcd-addons-628754                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         15m
	  kube-system                 kube-apiserver-addons-628754             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-628754    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-ck9m7                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-628754             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node addons-628754 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node addons-628754 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node addons-628754 status is now: NodeHasSufficientPID
	  Normal  NodeReady                15m                kubelet          Node addons-628754 status is now: NodeReady
	  Normal  RegisteredNode           15m                node-controller  Node addons-628754 event: Registered Node addons-628754 in Controller
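The node block above is ordinary kubectl describe output; assuming the profile still exists, it can be regenerated with:

    kubectl --context addons-628754 describe node addons-628754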
	
	
	==> dmesg <==
	[  +5.067774] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.008250] kauditd_printk_skb: 111 callbacks suppressed
	[Sep20 20:49] kauditd_printk_skb: 2 callbacks suppressed
	[ +12.153986] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.168012] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.152868] kauditd_printk_skb: 23 callbacks suppressed
	[  +5.310325] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.091709] kauditd_printk_skb: 39 callbacks suppressed
	[  +6.646153] kauditd_printk_skb: 6 callbacks suppressed
	[Sep20 20:50] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.211672] kauditd_printk_skb: 54 callbacks suppressed
	[  +9.022690] kauditd_printk_skb: 4 callbacks suppressed
	[Sep20 20:51] kauditd_printk_skb: 30 callbacks suppressed
	[Sep20 20:52] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:55] kauditd_printk_skb: 28 callbacks suppressed
	[Sep20 20:58] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.397806] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.278210] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.586919] kauditd_printk_skb: 56 callbacks suppressed
	[  +8.269165] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.002687] kauditd_printk_skb: 2 callbacks suppressed
	[Sep20 20:59] kauditd_printk_skb: 9 callbacks suppressed
	[  +8.497306] kauditd_printk_skb: 54 callbacks suppressed
	[Sep20 21:01] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.371167] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [8c22fecec8ece8b7118178c155707b750a52efe72682afae59b4ad2223c6bcd7] <==
	{"level":"info","ts":"2024-09-20T20:49:59.565464Z","caller":"traceutil/trace.go:171","msg":"trace[581102703] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"184.808174ms","start":"2024-09-20T20:49:59.380645Z","end":"2024-09-20T20:49:59.565453Z","steps":["trace[581102703] 'process raft request'  (duration: 184.43691ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:49:59.565524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.475086ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:49:59.566221Z","caller":"traceutil/trace.go:171","msg":"trace[268053702] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1135; }","duration":"138.182795ms","start":"2024-09-20T20:49:59.428029Z","end":"2024-09-20T20:49:59.566212Z","steps":["trace[268053702] 'agreement among raft nodes before linearized reading'  (duration: 137.45984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:17.859241Z","caller":"traceutil/trace.go:171","msg":"trace[158193461] linearizableReadLoop","detail":"{readStateIndex:2108; appliedIndex:2107; }","duration":"270.251065ms","start":"2024-09-20T20:58:17.588968Z","end":"2024-09-20T20:58:17.859219Z","steps":["trace[158193461] 'read index received'  (duration: 270.029869ms)","trace[158193461] 'applied index is now lower than readState.Index'  (duration: 220.692µs)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T20:58:17.859567Z","caller":"traceutil/trace.go:171","msg":"trace[251460462] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1971; }","duration":"294.826658ms","start":"2024-09-20T20:58:17.564730Z","end":"2024-09-20T20:58:17.859556Z","steps":["trace[251460462] 'process raft request'  (duration: 294.306874ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859895Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"216.341119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-09-20T20:58:17.860684Z","caller":"traceutil/trace.go:171","msg":"trace[2086911049] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1971; }","duration":"217.238418ms","start":"2024-09-20T20:58:17.643435Z","end":"2024-09-20T20:58:17.860673Z","steps":["trace[2086911049] 'agreement among raft nodes before linearized reading'  (duration: 216.223347ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.859984Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.032072ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/yakd-dashboard\" ","response":"range_response_count:1 size:883"}
	{"level":"info","ts":"2024-09-20T20:58:17.861132Z","caller":"traceutil/trace.go:171","msg":"trace[259109003] range","detail":"{range_begin:/registry/namespaces/yakd-dashboard; range_end:; response_count:1; response_revision:1971; }","duration":"272.17394ms","start":"2024-09-20T20:58:17.588947Z","end":"2024-09-20T20:58:17.861121Z","steps":["trace[259109003] 'agreement among raft nodes before linearized reading'  (duration: 270.978028ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860226Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"149.24562ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.861502Z","caller":"traceutil/trace.go:171","msg":"trace[1923698082] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1971; }","duration":"150.517679ms","start":"2024-09-20T20:58:17.710976Z","end":"2024-09-20T20:58:17.861494Z","steps":["trace[1923698082] 'agreement among raft nodes before linearized reading'  (duration: 149.237948ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:17.860256Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.092261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T20:58:17.864904Z","caller":"traceutil/trace.go:171","msg":"trace[1341777283] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1971; }","duration":"166.743439ms","start":"2024-09-20T20:58:17.698148Z","end":"2024-09-20T20:58:17.864891Z","steps":["trace[1341777283] 'agreement among raft nodes before linearized reading'  (duration: 162.088037ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:33.349055Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1548}
	{"level":"info","ts":"2024-09-20T20:58:33.390006Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1548,"took":"40.358531ms","hash":3091395663,"current-db-size-bytes":7442432,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":4321280,"current-db-size-in-use":"4.3 MB"}
	{"level":"info","ts":"2024-09-20T20:58:33.390117Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091395663,"revision":1548,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T20:58:40.291216Z","caller":"traceutil/trace.go:171","msg":"trace[1233032017] transaction","detail":"{read_only:false; response_revision:2184; number_of_response:1; }","duration":"102.579104ms","start":"2024-09-20T20:58:40.188615Z","end":"2024-09-20T20:58:40.291194Z","steps":["trace[1233032017] 'process raft request'  (duration: 102.479076ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.625494Z","caller":"traceutil/trace.go:171","msg":"trace[1492248117] transaction","detail":"{read_only:false; response_revision:2186; number_of_response:1; }","duration":"359.734741ms","start":"2024-09-20T20:58:41.265745Z","end":"2024-09-20T20:58:41.625480Z","steps":["trace[1492248117] 'process raft request'  (duration: 359.639083ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.625688Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.265729Z","time spent":"359.847904ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:2163 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:450 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-09-20T20:58:41.635221Z","caller":"traceutil/trace.go:171","msg":"trace[2009681851] transaction","detail":"{read_only:false; response_revision:2188; number_of_response:1; }","duration":"287.660276ms","start":"2024-09-20T20:58:41.347543Z","end":"2024-09-20T20:58:41.635203Z","steps":["trace[2009681851] 'process raft request'  (duration: 287.627001ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T20:58:41.635333Z","caller":"traceutil/trace.go:171","msg":"trace[230871389] transaction","detail":"{read_only:false; response_revision:2187; number_of_response:1; }","duration":"337.317765ms","start":"2024-09-20T20:58:41.298010Z","end":"2024-09-20T20:58:41.635327Z","steps":["trace[230871389] 'process raft request'  (duration: 337.000319ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T20:58:41.635417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-20T20:58:41.297992Z","time spent":"337.357119ms","remote":"127.0.0.1:36536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-628754\" mod_revision:2121 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-628754\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-628754\" > >"}
	{"level":"info","ts":"2024-09-20T21:03:33.356674Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2160}
	{"level":"info","ts":"2024-09-20T21:03:33.379614Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2160,"took":"22.09938ms","hash":3307747455,"current-db-size-bytes":7442432,"current-db-size":"7.4 MB","current-db-size-in-use-bytes":4943872,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-20T21:03:33.379685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3307747455,"revision":2160,"compact-revision":1548}
	
	
	==> gcp-auth [0f6e90ddc80f9e2033755b52a499b77eb67551eebdc520954077be24aee474ce] <==
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:50:09 Ready to marshal response ...
	2024/09/20 20:50:09 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:12 Ready to marshal response ...
	2024/09/20 20:58:12 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:18 Ready to marshal response ...
	2024/09/20 20:58:18 Ready to write response ...
	2024/09/20 20:58:22 Ready to marshal response ...
	2024/09/20 20:58:22 Ready to write response ...
	2024/09/20 20:58:26 Ready to marshal response ...
	2024/09/20 20:58:26 Ready to write response ...
	2024/09/20 20:58:36 Ready to marshal response ...
	2024/09/20 20:58:36 Ready to write response ...
	2024/09/20 20:59:08 Ready to marshal response ...
	2024/09/20 20:59:08 Ready to write response ...
	2024/09/20 20:59:14 Ready to marshal response ...
	2024/09/20 20:59:14 Ready to write response ...
	2024/09/20 21:01:33 Ready to marshal response ...
	2024/09/20 21:01:33 Ready to write response ...
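Each "Ready to marshal/write response" pair above corresponds to one admission request handled by the gcp-auth webhook. If the addon is still enabled, the webhook registration it serves can be listed (a sketch, not part of the captured output):

    kubectl --context addons-628754 get mutatingwebhookconfigurations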
	
	
	==> kernel <==
	 21:04:25 up 16 min,  0 users,  load average: 0.40, 0.48, 0.36
	Linux addons-628754 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7d18650c60246b08f391f79236e28a5ab464c708213418917832eaaaeca31cee] <==
	 > logger="UnhandledError"
	E0920 20:50:29.073774       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	E0920 20:50:29.080229       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.246.51:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.246.51:443: connect: connection refused" logger="UnhandledError"
	I0920 20:50:29.156909       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0920 20:58:12.607107       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.98.156.22"}
	E0920 20:58:42.815240       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 20:58:47.912557       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0920 20:59:08.742368       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 20:59:09.785503       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 20:59:14.271182       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 20:59:14.472878       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.230.48"}
	I0920 20:59:24.833550       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.833614       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.867777       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.868426       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.889977       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.890007       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.966388       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.966424       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 20:59:24.989650       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 20:59:24.989712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 20:59:25.868505       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 20:59:25.990598       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 20:59:26.152378       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0920 21:01:33.882210       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.151.149"}
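The alloc.go entries above record the ClusterIPs assigned to the Services created during the tests (headlamp, nginx, hello-world-app). A quick cross-check against the live objects, assuming they have not yet been cleaned up:

    kubectl --context addons-628754 get svc -A | grep -E 'headlamp|nginx|hello-world-app'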
	
	
	==> kube-controller-manager [6308109d7cef9336f1417a99426481f325df44f67d2a5c8358c1db202ca4cc32] <==
	W0920 21:02:21.677288       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:02:21.677331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:02:21.792700       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:02:21.792762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:02:41.425075       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:02:41.425163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:02:52.065004       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:02:52.065124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:03:09.747074       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:03:09.747204       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:03:15.426226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:03:15.426283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:03:28.239034       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:03:28.239202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:03:40.050895       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:03:40.050948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:03:48.867552       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:03:48.867597       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:04:00.980692       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:04:00.980752       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:04:11.381137       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:04:11.381202       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 21:04:20.073550       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 21:04:20.073610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 21:04:23.835303       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.078µs"
	
	
	==> kube-proxy [628b1145299a31b0e420178de8fe9f2172d1d9a0ddb8206a4eac7ff3e173af06] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 20:48:44.327957       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 20:48:44.350577       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.204"]
	E0920 20:48:44.350669       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 20:48:44.445024       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 20:48:44.445092       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 20:48:44.445120       1 server_linux.go:169] "Using iptables Proxier"
	I0920 20:48:44.451041       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 20:48:44.451371       1 server.go:483] "Version info" version="v1.31.1"
	I0920 20:48:44.451402       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 20:48:44.453177       1 config.go:199] "Starting service config controller"
	I0920 20:48:44.453217       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 20:48:44.453247       1 config.go:105] "Starting endpoint slice config controller"
	I0920 20:48:44.453252       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 20:48:44.454211       1 config.go:328] "Starting node config controller"
	I0920 20:48:44.454239       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 20:48:44.554025       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 20:48:44.554067       1 shared_informer.go:320] Caches are synced for service config
	I0920 20:48:44.554904       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f47006040cb11650131683d4882089c05f3a0b291129ff34a9d14f769bfb479] <==
	W0920 20:48:35.797909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:48:35.797962       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.808611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:48:35.808694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.930646       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:35.930696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:35.931077       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:48:35.931116       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 20:48:35.942346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:48:35.942395       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.021065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.021114       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.033505       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 20:48:36.033567       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.050598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.050648       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.094192       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:48:36.094242       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.113042       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 20:48:36.113500       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.211462       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:48:36.211605       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 20:48:36.341367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:48:36.341480       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0920 20:48:37.917584       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 21:03:37 addons-628754 kubelet[1204]: E0920 21:03:37.735473    1204 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:03:37 addons-628754 kubelet[1204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:03:37 addons-628754 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:03:37 addons-628754 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:03:37 addons-628754 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:03:38 addons-628754 kubelet[1204]: E0920 21:03:38.217008    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866218216596837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:03:38 addons-628754 kubelet[1204]: E0920 21:03:38.217058    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866218216596837,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:03:40 addons-628754 kubelet[1204]: E0920 21:03:40.718086    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	Sep 20 21:03:48 addons-628754 kubelet[1204]: E0920 21:03:48.221491    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866228220884671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:03:48 addons-628754 kubelet[1204]: E0920 21:03:48.221555    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866228220884671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:03:53 addons-628754 kubelet[1204]: E0920 21:03:53.717514    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	Sep 20 21:03:58 addons-628754 kubelet[1204]: E0920 21:03:58.224716    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866238224376396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:03:58 addons-628754 kubelet[1204]: E0920 21:03:58.224754    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866238224376396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:04:07 addons-628754 kubelet[1204]: E0920 21:04:07.719534    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	Sep 20 21:04:08 addons-628754 kubelet[1204]: E0920 21:04:08.227450    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866248227049667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:04:08 addons-628754 kubelet[1204]: E0920 21:04:08.227498    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866248227049667,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:04:18 addons-628754 kubelet[1204]: E0920 21:04:18.231073    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866258230277932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:04:18 addons-628754 kubelet[1204]: E0920 21:04:18.231104    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726866258230277932,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:559237,},InodesUsed:&UInt64Value{Value:192,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:04:22 addons-628754 kubelet[1204]: E0920 21:04:22.717221    1204 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="aea8b93f-0c58-47be-a497-a87b509f0c6b"
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.229379    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-tmp-dir\") pod \"d19e3126-c1ed-4af9-b7ab-80e6e4a06949\" (UID: \"d19e3126-c1ed-4af9-b7ab-80e6e4a06949\") "
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.229426    1204 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phb8z\" (UniqueName: \"kubernetes.io/projected/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-kube-api-access-phb8z\") pod \"d19e3126-c1ed-4af9-b7ab-80e6e4a06949\" (UID: \"d19e3126-c1ed-4af9-b7ab-80e6e4a06949\") "
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.229785    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "d19e3126-c1ed-4af9-b7ab-80e6e4a06949" (UID: "d19e3126-c1ed-4af9-b7ab-80e6e4a06949"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.232678    1204 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-kube-api-access-phb8z" (OuterVolumeSpecName: "kube-api-access-phb8z") pod "d19e3126-c1ed-4af9-b7ab-80e6e4a06949" (UID: "d19e3126-c1ed-4af9-b7ab-80e6e4a06949"). InnerVolumeSpecName "kube-api-access-phb8z". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.330034    1204 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-phb8z\" (UniqueName: \"kubernetes.io/projected/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-kube-api-access-phb8z\") on node \"addons-628754\" DevicePath \"\""
	Sep 20 21:04:25 addons-628754 kubelet[1204]: I0920 21:04:25.330080    1204 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d19e3126-c1ed-4af9-b7ab-80e6e4a06949-tmp-dir\") on node \"addons-628754\" DevicePath \"\""
	
	
	==> storage-provisioner [d26a1572da03a177fa626061edd39e2d827d6846cbe46a95753c4c076551c8c6] <==
	I0920 20:48:51.342179       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:48:51.371362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:48:51.371432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:48:51.522612       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:48:51.529164       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	I0920 20:48:51.523653       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7272c10f-2050-4885-9a02-b4fcd176512f", APIVersion:"v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256 became leader
	I0920 20:48:51.731924       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-628754_1fadd5d2-4217-4dae-b00a-947684a01256!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-628754 -n addons-628754
helpers_test.go:261: (dbg) Run:  kubectl --context addons-628754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox metrics-server-84c5f94fbc-jcnct
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-628754 describe pod busybox metrics-server-84c5f94fbc-jcnct
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-628754 describe pod busybox metrics-server-84c5f94fbc-jcnct: exit status 1 (85.4181ms)

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-628754/192.168.39.204
	Start Time:       Fri, 20 Sep 2024 20:50:09 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.22
	IPs:
	  IP:  10.244.0.22
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s2lkf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-s2lkf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  14m                   default-scheduler  Successfully assigned default/busybox to addons-628754
	  Normal   Pulling    12m (x4 over 14m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 14m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 14m)     kubelet            Error: ErrImagePull
	  Warning  Failed     12m (x6 over 14m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m12s (x43 over 14m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "metrics-server-84c5f94fbc-jcnct" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-628754 describe pod busybox metrics-server-84c5f94fbc-jcnct: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (356.20s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 node stop m02 -v=7 --alsologtostderr
E0920 21:19:52.784550   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:20:09.132215   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:20:33.746155   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:21:32.203855   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-797954 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.480574638s)

                                                
                                                
-- stdout --
	* Stopping node "ha-797954-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:19:35.896670   34044 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:19:35.896818   34044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:19:35.896827   34044 out.go:358] Setting ErrFile to fd 2...
	I0920 21:19:35.896831   34044 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:19:35.896995   34044 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:19:35.897281   34044 mustload.go:65] Loading cluster: ha-797954
	I0920 21:19:35.897769   34044 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:19:35.897799   34044 stop.go:39] StopHost: ha-797954-m02
	I0920 21:19:35.898171   34044 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:19:35.898206   34044 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:19:35.913885   34044 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33675
	I0920 21:19:35.914399   34044 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:19:35.914920   34044 main.go:141] libmachine: Using API Version  1
	I0920 21:19:35.914939   34044 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:19:35.915286   34044 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:19:35.917625   34044 out.go:177] * Stopping node "ha-797954-m02"  ...
	I0920 21:19:35.918748   34044 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 21:19:35.918783   34044 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:19:35.919003   34044 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 21:19:35.919029   34044 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:19:35.921614   34044 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:19:35.922073   34044 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:19:35.922103   34044 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:19:35.922409   34044 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:19:35.922587   34044 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:19:35.922740   34044 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:19:35.922878   34044 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:19:36.015495   34044 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 21:19:36.072093   34044 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 21:19:36.127595   34044 main.go:141] libmachine: Stopping "ha-797954-m02"...
	I0920 21:19:36.127620   34044 main.go:141] libmachine: (ha-797954-m02) Calling .GetState
	I0920 21:19:36.128923   34044 main.go:141] libmachine: (ha-797954-m02) Calling .Stop
	I0920 21:19:36.132515   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 0/120
	I0920 21:19:37.133857   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 1/120
	I0920 21:19:38.135185   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 2/120
	I0920 21:19:39.137413   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 3/120
	I0920 21:19:40.138813   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 4/120
	I0920 21:19:41.140334   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 5/120
	I0920 21:19:42.141634   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 6/120
	I0920 21:19:43.143121   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 7/120
	I0920 21:19:44.145342   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 8/120
	I0920 21:19:45.146734   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 9/120
	I0920 21:19:46.148843   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 10/120
	I0920 21:19:47.150260   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 11/120
	I0920 21:19:48.151728   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 12/120
	I0920 21:19:49.153196   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 13/120
	I0920 21:19:50.154432   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 14/120
	I0920 21:19:51.156188   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 15/120
	I0920 21:19:52.157696   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 16/120
	I0920 21:19:53.159179   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 17/120
	I0920 21:19:54.161504   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 18/120
	I0920 21:19:55.163154   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 19/120
	I0920 21:19:56.165596   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 20/120
	I0920 21:19:57.167428   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 21/120
	I0920 21:19:58.170045   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 22/120
	I0920 21:19:59.171473   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 23/120
	I0920 21:20:00.173615   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 24/120
	I0920 21:20:01.175087   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 25/120
	I0920 21:20:02.177499   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 26/120
	I0920 21:20:03.178975   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 27/120
	I0920 21:20:04.180115   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 28/120
	I0920 21:20:05.181548   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 29/120
	I0920 21:20:06.183670   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 30/120
	I0920 21:20:07.185498   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 31/120
	I0920 21:20:08.186996   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 32/120
	I0920 21:20:09.189344   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 33/120
	I0920 21:20:10.190703   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 34/120
	I0920 21:20:11.192582   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 35/120
	I0920 21:20:12.193997   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 36/120
	I0920 21:20:13.195426   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 37/120
	I0920 21:20:14.197439   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 38/120
	I0920 21:20:15.198786   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 39/120
	I0920 21:20:16.200924   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 40/120
	I0920 21:20:17.203244   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 41/120
	I0920 21:20:18.204532   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 42/120
	I0920 21:20:19.206238   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 43/120
	I0920 21:20:20.207523   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 44/120
	I0920 21:20:21.208965   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 45/120
	I0920 21:20:22.210673   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 46/120
	I0920 21:20:23.212384   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 47/120
	I0920 21:20:24.213810   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 48/120
	I0920 21:20:25.216035   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 49/120
	I0920 21:20:26.218150   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 50/120
	I0920 21:20:27.219565   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 51/120
	I0920 21:20:28.221452   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 52/120
	I0920 21:20:29.222844   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 53/120
	I0920 21:20:30.224213   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 54/120
	I0920 21:20:31.226342   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 55/120
	I0920 21:20:32.228040   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 56/120
	I0920 21:20:33.229396   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 57/120
	I0920 21:20:34.230821   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 58/120
	I0920 21:20:35.232249   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 59/120
	I0920 21:20:36.234337   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 60/120
	I0920 21:20:37.235823   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 61/120
	I0920 21:20:38.237016   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 62/120
	I0920 21:20:39.238557   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 63/120
	I0920 21:20:40.240166   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 64/120
	I0920 21:20:41.242250   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 65/120
	I0920 21:20:42.243713   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 66/120
	I0920 21:20:43.245285   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 67/120
	I0920 21:20:44.246612   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 68/120
	I0920 21:20:45.248414   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 69/120
	I0920 21:20:46.250368   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 70/120
	I0920 21:20:47.252658   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 71/120
	I0920 21:20:48.253836   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 72/120
	I0920 21:20:49.255276   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 73/120
	I0920 21:20:50.256409   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 74/120
	I0920 21:20:51.258250   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 75/120
	I0920 21:20:52.259488   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 76/120
	I0920 21:20:53.261502   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 77/120
	I0920 21:20:54.262754   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 78/120
	I0920 21:20:55.264130   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 79/120
	I0920 21:20:56.266491   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 80/120
	I0920 21:20:57.267902   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 81/120
	I0920 21:20:58.269496   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 82/120
	I0920 21:20:59.270988   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 83/120
	I0920 21:21:00.272980   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 84/120
	I0920 21:21:01.275187   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 85/120
	I0920 21:21:02.277456   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 86/120
	I0920 21:21:03.278955   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 87/120
	I0920 21:21:04.280460   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 88/120
	I0920 21:21:05.282764   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 89/120
	I0920 21:21:06.284922   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 90/120
	I0920 21:21:07.287014   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 91/120
	I0920 21:21:08.288608   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 92/120
	I0920 21:21:09.290246   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 93/120
	I0920 21:21:10.291736   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 94/120
	I0920 21:21:11.293720   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 95/120
	I0920 21:21:12.295051   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 96/120
	I0920 21:21:13.296370   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 97/120
	I0920 21:21:14.298237   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 98/120
	I0920 21:21:15.299632   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 99/120
	I0920 21:21:16.302053   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 100/120
	I0920 21:21:17.304363   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 101/120
	I0920 21:21:18.305818   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 102/120
	I0920 21:21:19.307274   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 103/120
	I0920 21:21:20.309455   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 104/120
	I0920 21:21:21.311543   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 105/120
	I0920 21:21:22.312801   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 106/120
	I0920 21:21:23.314465   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 107/120
	I0920 21:21:24.316051   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 108/120
	I0920 21:21:25.317358   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 109/120
	I0920 21:21:26.319725   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 110/120
	I0920 21:21:27.321447   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 111/120
	I0920 21:21:28.322744   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 112/120
	I0920 21:21:29.324975   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 113/120
	I0920 21:21:30.326451   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 114/120
	I0920 21:21:31.328428   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 115/120
	I0920 21:21:32.329852   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 116/120
	I0920 21:21:33.331214   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 117/120
	I0920 21:21:34.333364   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 118/120
	I0920 21:21:35.334818   34044 main.go:141] libmachine: (ha-797954-m02) Waiting for machine to stop 119/120
	I0920 21:21:36.336062   34044 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 21:21:36.336184   34044 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-797954 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr: (18.706776643s)
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-797954 -n ha-797954
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 logs -n 25
E0920 21:21:55.668358   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 logs -n 25: (1.447583275s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m03_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m04 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp testdata/cp-test.txt                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m03 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-797954 node stop m02 -v=7                                                     | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
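	The table above is the tail of the minikube audit log captured for this run: during the CopyFile step, testdata/cp-test.txt is copied onto each node and then node-to-node, with an "ssh -n ... sudo cat" after every copy to confirm the file arrived intact; the final row is the "node stop m02" call issued by StopSecondaryNode. Replayed by hand the pattern looks roughly like the sketch below (the audit log omits the profile flag, so -p ha-797954 is an assumption here):

	    # copy a file from node m04 to the primary node, then read it back on the target
	    minikube -p ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt \
	      ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt
	    minikube -p ha-797954 ssh -n ha-797954 sudo cat /home/docker/cp-test_ha-797954-m04_ha-797954.txt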
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:14:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:14:58.093383   29546 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:14:58.093518   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093529   29546 out.go:358] Setting ErrFile to fd 2...
	I0920 21:14:58.093536   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093721   29546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:14:58.094299   29546 out.go:352] Setting JSON to false
	I0920 21:14:58.095137   29546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3446,"bootTime":1726863452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:14:58.095237   29546 start.go:139] virtualization: kvm guest
	I0920 21:14:58.097575   29546 out.go:177] * [ha-797954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:14:58.099174   29546 notify.go:220] Checking for updates...
	I0920 21:14:58.099184   29546 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:14:58.100699   29546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:14:58.102318   29546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:14:58.103731   29546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.105067   29546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:14:58.106539   29546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:14:58.108120   29546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:14:58.144556   29546 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 21:14:58.146098   29546 start.go:297] selected driver: kvm2
	I0920 21:14:58.146111   29546 start.go:901] validating driver "kvm2" against <nil>
	I0920 21:14:58.146123   29546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:14:58.146850   29546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.146917   29546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:14:58.162003   29546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:14:58.162056   29546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 21:14:58.162303   29546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:14:58.162329   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:14:58.162373   29546 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 21:14:58.162381   29546 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 21:14:58.162430   29546 start.go:340] cluster config:
	{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:14:58.162516   29546 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.164481   29546 out.go:177] * Starting "ha-797954" primary control-plane node in "ha-797954" cluster
	I0920 21:14:58.165856   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:14:58.165884   29546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:14:58.165902   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:14:58.165974   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:14:58.165984   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:14:58.166298   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:14:58.166316   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json: {Name:mk8fa57fb542c3761d64832828491b92c6ccc2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:14:58.166454   29546 start.go:360] acquireMachinesLock for ha-797954: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:14:58.166482   29546 start.go:364] duration metric: took 15.486µs to acquireMachinesLock for "ha-797954"
	I0920 21:14:58.166497   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:14:58.166552   29546 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 21:14:58.168264   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:14:58.168395   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:58.168428   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:58.182843   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0920 21:14:58.183355   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:58.183917   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:14:58.183941   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:58.184213   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:58.184397   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:14:58.184524   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:14:58.184653   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:14:58.184689   29546 client.go:168] LocalClient.Create starting
	I0920 21:14:58.184716   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:14:58.184749   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184762   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184806   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:14:58.184823   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184836   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184851   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:14:58.184863   29546 main.go:141] libmachine: (ha-797954) Calling .PreCreateCheck
	I0920 21:14:58.185178   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:14:58.185555   29546 main.go:141] libmachine: Creating machine...
	I0920 21:14:58.185568   29546 main.go:141] libmachine: (ha-797954) Calling .Create
	I0920 21:14:58.185703   29546 main.go:141] libmachine: (ha-797954) Creating KVM machine...
	I0920 21:14:58.186926   29546 main.go:141] libmachine: (ha-797954) DBG | found existing default KVM network
	I0920 21:14:58.187682   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.187531   29569 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 21:14:58.187702   29546 main.go:141] libmachine: (ha-797954) DBG | created network xml: 
	I0920 21:14:58.187715   29546 main.go:141] libmachine: (ha-797954) DBG | <network>
	I0920 21:14:58.187726   29546 main.go:141] libmachine: (ha-797954) DBG |   <name>mk-ha-797954</name>
	I0920 21:14:58.187736   29546 main.go:141] libmachine: (ha-797954) DBG |   <dns enable='no'/>
	I0920 21:14:58.187745   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187755   29546 main.go:141] libmachine: (ha-797954) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 21:14:58.187763   29546 main.go:141] libmachine: (ha-797954) DBG |     <dhcp>
	I0920 21:14:58.187769   29546 main.go:141] libmachine: (ha-797954) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 21:14:58.187775   29546 main.go:141] libmachine: (ha-797954) DBG |     </dhcp>
	I0920 21:14:58.187783   29546 main.go:141] libmachine: (ha-797954) DBG |   </ip>
	I0920 21:14:58.187799   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187840   29546 main.go:141] libmachine: (ha-797954) DBG | </network>
	I0920 21:14:58.187855   29546 main.go:141] libmachine: (ha-797954) DBG | 
	I0920 21:14:58.192877   29546 main.go:141] libmachine: (ha-797954) DBG | trying to create private KVM network mk-ha-797954 192.168.39.0/24...
	I0920 21:14:58.259633   29546 main.go:141] libmachine: (ha-797954) DBG | private KVM network mk-ha-797954 192.168.39.0/24 created
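	The XML written out above is the dedicated libvirt network (mk-ha-797954, 192.168.39.0/24, DHCP on, DNS disabled) that minikube creates for the cluster alongside the pre-existing "default" network. If the libvirt client tools are installed on the host (an assumption, not something the log shows), the result can be inspected with:

	    virsh net-list --all             # mk-ha-797954 should be listed as active
	    virsh net-dumpxml mk-ha-797954   # prints the <network> definition logged above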
	I0920 21:14:58.259663   29546 main.go:141] libmachine: (ha-797954) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.259676   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.259606   29569 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.259694   29546 main.go:141] libmachine: (ha-797954) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:14:58.259758   29546 main.go:141] libmachine: (ha-797954) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:14:58.505872   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.505739   29569 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa...
	I0920 21:14:58.596789   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596635   29569 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk...
	I0920 21:14:58.596822   29546 main.go:141] libmachine: (ha-797954) DBG | Writing magic tar header
	I0920 21:14:58.596834   29546 main.go:141] libmachine: (ha-797954) DBG | Writing SSH key tar header
	I0920 21:14:58.596844   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596754   29569 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.596858   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954
	I0920 21:14:58.596904   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 (perms=drwx------)
	I0920 21:14:58.596921   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:14:58.596928   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:14:58.596967   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.596981   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:14:58.596994   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:14:58.597010   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:14:58.597027   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:14:58.597042   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:14:58.597053   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:14:58.597060   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home
	I0920 21:14:58.597068   29546 main.go:141] libmachine: (ha-797954) DBG | Skipping /home - not owner
	I0920 21:14:58.597077   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:14:58.597083   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:58.598143   29546 main.go:141] libmachine: (ha-797954) define libvirt domain using xml: 
	I0920 21:14:58.598171   29546 main.go:141] libmachine: (ha-797954) <domain type='kvm'>
	I0920 21:14:58.598181   29546 main.go:141] libmachine: (ha-797954)   <name>ha-797954</name>
	I0920 21:14:58.598192   29546 main.go:141] libmachine: (ha-797954)   <memory unit='MiB'>2200</memory>
	I0920 21:14:58.598202   29546 main.go:141] libmachine: (ha-797954)   <vcpu>2</vcpu>
	I0920 21:14:58.598215   29546 main.go:141] libmachine: (ha-797954)   <features>
	I0920 21:14:58.598243   29546 main.go:141] libmachine: (ha-797954)     <acpi/>
	I0920 21:14:58.598261   29546 main.go:141] libmachine: (ha-797954)     <apic/>
	I0920 21:14:58.598269   29546 main.go:141] libmachine: (ha-797954)     <pae/>
	I0920 21:14:58.598292   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598311   29546 main.go:141] libmachine: (ha-797954)   </features>
	I0920 21:14:58.598326   29546 main.go:141] libmachine: (ha-797954)   <cpu mode='host-passthrough'>
	I0920 21:14:58.598354   29546 main.go:141] libmachine: (ha-797954)   
	I0920 21:14:58.598373   29546 main.go:141] libmachine: (ha-797954)   </cpu>
	I0920 21:14:58.598388   29546 main.go:141] libmachine: (ha-797954)   <os>
	I0920 21:14:58.598403   29546 main.go:141] libmachine: (ha-797954)     <type>hvm</type>
	I0920 21:14:58.598417   29546 main.go:141] libmachine: (ha-797954)     <boot dev='cdrom'/>
	I0920 21:14:58.598424   29546 main.go:141] libmachine: (ha-797954)     <boot dev='hd'/>
	I0920 21:14:58.598430   29546 main.go:141] libmachine: (ha-797954)     <bootmenu enable='no'/>
	I0920 21:14:58.598435   29546 main.go:141] libmachine: (ha-797954)   </os>
	I0920 21:14:58.598441   29546 main.go:141] libmachine: (ha-797954)   <devices>
	I0920 21:14:58.598451   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='cdrom'>
	I0920 21:14:58.598467   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/boot2docker.iso'/>
	I0920 21:14:58.598482   29546 main.go:141] libmachine: (ha-797954)       <target dev='hdc' bus='scsi'/>
	I0920 21:14:58.598498   29546 main.go:141] libmachine: (ha-797954)       <readonly/>
	I0920 21:14:58.598520   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598530   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='disk'>
	I0920 21:14:58.598537   29546 main.go:141] libmachine: (ha-797954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:14:58.598552   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk'/>
	I0920 21:14:58.598567   29546 main.go:141] libmachine: (ha-797954)       <target dev='hda' bus='virtio'/>
	I0920 21:14:58.598578   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598592   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598603   29546 main.go:141] libmachine: (ha-797954)       <source network='mk-ha-797954'/>
	I0920 21:14:58.598612   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598623   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598631   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598639   29546 main.go:141] libmachine: (ha-797954)       <source network='default'/>
	I0920 21:14:58.598650   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598658   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598668   29546 main.go:141] libmachine: (ha-797954)     <serial type='pty'>
	I0920 21:14:58.598676   29546 main.go:141] libmachine: (ha-797954)       <target port='0'/>
	I0920 21:14:58.598685   29546 main.go:141] libmachine: (ha-797954)     </serial>
	I0920 21:14:58.598696   29546 main.go:141] libmachine: (ha-797954)     <console type='pty'>
	I0920 21:14:58.598706   29546 main.go:141] libmachine: (ha-797954)       <target type='serial' port='0'/>
	I0920 21:14:58.598718   29546 main.go:141] libmachine: (ha-797954)     </console>
	I0920 21:14:58.598725   29546 main.go:141] libmachine: (ha-797954)     <rng model='virtio'>
	I0920 21:14:58.598738   29546 main.go:141] libmachine: (ha-797954)       <backend model='random'>/dev/random</backend>
	I0920 21:14:58.598754   29546 main.go:141] libmachine: (ha-797954)     </rng>
	I0920 21:14:58.598766   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598780   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598790   29546 main.go:141] libmachine: (ha-797954)   </devices>
	I0920 21:14:58.598799   29546 main.go:141] libmachine: (ha-797954) </domain>
	I0920 21:14:58.598807   29546 main.go:141] libmachine: (ha-797954) 
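	The domain XML above is what minikube hands to libvirt for the ha-797954 VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a bootable CD-ROM, the raw disk image, and two virtio NICs (one on mk-ha-797954, one on the default network), which is why two MAC addresses show up in the DBG lines that follow. A hedged way to double-check the definition and the DHCP lease from the host, again assuming virsh is available:

	    virsh dumpxml ha-797954     # definition as stored by libvirt
	    virsh domifaddr ha-797954   # interface addresses once the guest has a lease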
	I0920 21:14:58.603113   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:c6:03:52 in network default
	I0920 21:14:58.603707   29546 main.go:141] libmachine: (ha-797954) Ensuring networks are active...
	I0920 21:14:58.603742   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:58.604393   29546 main.go:141] libmachine: (ha-797954) Ensuring network default is active
	I0920 21:14:58.604660   29546 main.go:141] libmachine: (ha-797954) Ensuring network mk-ha-797954 is active
	I0920 21:14:58.605088   29546 main.go:141] libmachine: (ha-797954) Getting domain xml...
	I0920 21:14:58.605647   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:59.802448   29546 main.go:141] libmachine: (ha-797954) Waiting to get IP...
	I0920 21:14:59.803386   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:59.803845   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:14:59.803891   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:59.803834   29569 retry.go:31] will retry after 223.587991ms: waiting for machine to come up
	I0920 21:15:00.029437   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.029858   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.029879   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.029830   29569 retry.go:31] will retry after 354.177356ms: waiting for machine to come up
	I0920 21:15:00.385460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.385795   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.385813   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.385762   29569 retry.go:31] will retry after 379.865741ms: waiting for machine to come up
	I0920 21:15:00.767361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.767813   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.767841   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.767778   29569 retry.go:31] will retry after 505.320471ms: waiting for machine to come up
	I0920 21:15:01.274395   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.274767   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.274801   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.274739   29569 retry.go:31] will retry after 590.453192ms: waiting for machine to come up
	I0920 21:15:01.866507   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.866905   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.866927   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.866860   29569 retry.go:31] will retry after 645.79905ms: waiting for machine to come up
	I0920 21:15:02.514921   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:02.515518   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:02.515553   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:02.515465   29569 retry.go:31] will retry after 946.798883ms: waiting for machine to come up
	I0920 21:15:03.464250   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:03.464679   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:03.464704   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:03.464640   29569 retry.go:31] will retry after 983.162198ms: waiting for machine to come up
	I0920 21:15:04.450382   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:04.450835   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:04.450858   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:04.450809   29569 retry.go:31] will retry after 1.725909532s: waiting for machine to come up
	I0920 21:15:06.178808   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:06.179200   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:06.179226   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:06.179150   29569 retry.go:31] will retry after 2.07990615s: waiting for machine to come up
	I0920 21:15:08.260549   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:08.261003   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:08.261030   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:08.260968   29569 retry.go:31] will retry after 2.83157909s: waiting for machine to come up
	I0920 21:15:11.093814   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:11.094191   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:11.094225   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:11.094158   29569 retry.go:31] will retry after 3.472040975s: waiting for machine to come up
	I0920 21:15:14.569242   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:14.569658   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:14.569683   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:14.569603   29569 retry.go:31] will retry after 2.791417275s: waiting for machine to come up
	I0920 21:15:17.364625   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:17.365099   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:17.365116   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:17.365066   29569 retry.go:31] will retry after 5.415425974s: waiting for machine to come up
	I0920 21:15:22.785521   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786004   29546 main.go:141] libmachine: (ha-797954) Found IP for machine: 192.168.39.240
	I0920 21:15:22.786028   29546 main.go:141] libmachine: (ha-797954) Reserving static IP address...
	I0920 21:15:22.786041   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has current primary IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786330   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find host DHCP lease matching {name: "ha-797954", mac: "52:54:00:62:bb:49", ip: "192.168.39.240"} in network mk-ha-797954
	I0920 21:15:22.857285   29546 main.go:141] libmachine: (ha-797954) Reserved static IP address: 192.168.39.240
	I0920 21:15:22.857326   29546 main.go:141] libmachine: (ha-797954) Waiting for SSH to be available...
	I0920 21:15:22.857348   29546 main.go:141] libmachine: (ha-797954) DBG | Getting to WaitForSSH function...
	I0920 21:15:22.859900   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860220   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.860240   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860390   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH client type: external
	I0920 21:15:22.860414   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa (-rw-------)
	I0920 21:15:22.860439   29546 main.go:141] libmachine: (ha-797954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:15:22.860456   29546 main.go:141] libmachine: (ha-797954) DBG | About to run SSH command:
	I0920 21:15:22.860469   29546 main.go:141] libmachine: (ha-797954) DBG | exit 0
	I0920 21:15:22.987393   29546 main.go:141] libmachine: (ha-797954) DBG | SSH cmd err, output: <nil>: 
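	The WaitForSSH loop above simply runs "exit 0" over SSH with the options printed in the DBG line until it succeeds. The same check can be replayed by hand with the logged key path and IP; this is a restatement of the logged command, nothing new:

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa \
	        docker@192.168.39.240 'exit 0'; echo $?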
	I0920 21:15:22.987618   29546 main.go:141] libmachine: (ha-797954) KVM machine creation complete!
	I0920 21:15:22.987926   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:22.988498   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988678   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988799   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:15:22.988815   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:22.990056   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:15:22.990073   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:15:22.990078   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:15:22.990084   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:22.992361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992791   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.992813   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992947   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:22.993123   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993226   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993339   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:22.993444   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:22.993635   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:22.993647   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:15:23.098405   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:15:23.098428   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:15:23.098437   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.101054   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101432   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.101460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101637   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.101840   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.101988   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.102095   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.102248   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.102456   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.102471   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:15:23.207907   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:15:23.208033   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:15:23.208051   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:15:23.208064   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208298   29546 buildroot.go:166] provisioning hostname "ha-797954"
	I0920 21:15:23.208319   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208502   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.211194   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211511   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.211538   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211661   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.211829   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212009   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212107   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.212272   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.212442   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.212456   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954 && echo "ha-797954" | sudo tee /etc/hostname
	I0920 21:15:23.333898   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:15:23.333955   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.336557   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337630   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.337652   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337927   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.338114   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338300   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338451   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.338601   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.338766   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.338781   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:15:23.457012   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
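	The script above sets the guest hostname and pins it in /etc/hosts, replacing an existing 127.0.1.1 entry or appending one. A quick way to confirm the result from the host, assuming the profile name as the -p value, is something like:

	    minikube -p ha-797954 ssh "hostname && grep ha-797954 /etc/hosts"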
	I0920 21:15:23.457045   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:15:23.457068   29546 buildroot.go:174] setting up certificates
	I0920 21:15:23.457076   29546 provision.go:84] configureAuth start
	I0920 21:15:23.457084   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.457358   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:23.460274   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460523   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.460544   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460743   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.463346   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463695   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.463721   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463823   29546 provision.go:143] copyHostCerts
	I0920 21:15:23.463873   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.463907   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:15:23.463922   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.464005   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:15:23.464111   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464141   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:15:23.464151   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464195   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:15:23.464272   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464310   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:15:23.464319   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464355   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:15:23.464469   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954 san=[127.0.0.1 192.168.39.240 ha-797954 localhost minikube]
	I0920 21:15:23.729781   29546 provision.go:177] copyRemoteCerts
	I0920 21:15:23.729854   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:15:23.729878   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.732636   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733071   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.733098   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733347   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.733569   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.733717   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.733849   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:23.819565   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:15:23.819631   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:15:23.849592   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:15:23.849659   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 21:15:23.875769   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:15:23.875849   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
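	At this point configureAuth has generated a server certificate with SANs for 127.0.0.1, 192.168.39.240, ha-797954, localhost and minikube, and copied the CA plus server cert/key into /etc/docker inside the guest via scp. A hedged spot-check of the copied files:

	    minikube -p ha-797954 ssh "ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"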
	I0920 21:15:23.901303   29546 provision.go:87] duration metric: took 444.215684ms to configureAuth
	I0920 21:15:23.901335   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:15:23.901537   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:23.901620   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.904924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905466   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.905493   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905749   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.905969   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906138   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906294   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.906464   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.906655   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.906678   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:15:24.134358   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
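	The command above drops CRIO_MINIKUBE_OPTIONS (marking 10.96.0.0/12, the service CIDR, as an insecure registry range) into /etc/sysconfig/crio.minikube and restarts CRI-O. To verify inside the guest, something along these lines should work:

	    minikube -p ha-797954 ssh "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"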
	
	I0920 21:15:24.134386   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:15:24.134394   29546 main.go:141] libmachine: (ha-797954) Calling .GetURL
	I0920 21:15:24.135777   29546 main.go:141] libmachine: (ha-797954) DBG | Using libvirt version 6000000
	I0920 21:15:24.138158   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138556   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.138587   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138722   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:15:24.138737   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:15:24.138745   29546 client.go:171] duration metric: took 25.95404705s to LocalClient.Create
	I0920 21:15:24.138772   29546 start.go:167] duration metric: took 25.954119175s to libmachine.API.Create "ha-797954"
	I0920 21:15:24.138783   29546 start.go:293] postStartSetup for "ha-797954" (driver="kvm2")
	I0920 21:15:24.138798   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:15:24.138818   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.139051   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:15:24.139076   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.140985   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141346   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.141374   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141499   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.141674   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.141828   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.142019   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.225959   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:15:24.230124   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:15:24.230144   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:15:24.230206   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:15:24.230307   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:15:24.230322   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:15:24.230441   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:15:24.240575   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:24.264696   29546 start.go:296] duration metric: took 125.900056ms for postStartSetup
	I0920 21:15:24.264741   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:24.265475   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.267916   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268228   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.268259   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268458   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:24.268638   29546 start.go:128] duration metric: took 26.102077231s to createHost
	I0920 21:15:24.268658   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.270797   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271111   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.271130   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271302   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.271452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271594   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271702   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.271838   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:24.272013   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:24.272027   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:15:24.379993   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866924.358104924
	
	I0920 21:15:24.380018   29546 fix.go:216] guest clock: 1726866924.358104924
	I0920 21:15:24.380025   29546 fix.go:229] Guest: 2024-09-20 21:15:24.358104924 +0000 UTC Remote: 2024-09-20 21:15:24.268649465 +0000 UTC m=+26.209850110 (delta=89.455459ms)
	I0920 21:15:24.380055   29546 fix.go:200] guest clock delta is within tolerance: 89.455459ms
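The guest-clock check above reads "date +%s.%N" on the VM over SSH and compares it with the host-side timestamp, proceeding only when the delta stays within tolerance (about 89ms here). A rough stand-alone version of the same comparison, using the SSH key and user shown in the sshutil line earlier, is only a sketch:

    # Read the guest clock over SSH and diff it against the host clock (sketch of fix.go's check).
    guest_ts=$(ssh -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa \
               docker@192.168.39.240 'date +%s.%N')
    host_ts=$(date +%s.%N)
    echo "clock delta: $(echo "$host_ts - $guest_ts" | bc)s"   # the log above measured ~0.089s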
	I0920 21:15:24.380059   29546 start.go:83] releasing machines lock for "ha-797954", held for 26.213569983s
	I0920 21:15:24.380077   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.380328   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.382878   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383258   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.383280   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383441   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.383944   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384137   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384211   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:15:24.384265   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.384357   29546 ssh_runner.go:195] Run: cat /version.json
	I0920 21:15:24.384382   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.386758   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.386983   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387156   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387178   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387338   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387455   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387479   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387524   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387629   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387698   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.387766   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387832   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.387879   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.388006   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.464447   29546 ssh_runner.go:195] Run: systemctl --version
	I0920 21:15:24.489896   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:15:24.649570   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:15:24.656389   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:15:24.656455   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:15:24.672721   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:15:24.672744   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:15:24.672817   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:15:24.689886   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:15:24.704352   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:15:24.704400   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:15:24.718653   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:15:24.733865   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:15:24.855165   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:15:25.006541   29546 docker.go:233] disabling docker service ...
	I0920 21:15:25.006639   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:15:25.021286   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:15:25.034719   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:15:25.179842   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:15:25.303409   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:15:25.317836   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:15:25.337433   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:15:25.337507   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.348743   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:15:25.348803   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.359907   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.370733   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.381300   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:15:25.392436   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.403057   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.420657   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
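The sed edits above pin the pause image, switch the cgroup manager to cgroupfs, scope conmon to the pod cgroup and open unprivileged ports via default_sysctls, all in /etc/crio/crio.conf.d/02-crio.conf. A quick way to confirm the net effect of those edits, shown only as a sketch:

    # Show the settings the sed commands above are expected to leave in 02-crio.conf.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, based on the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",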
	I0920 21:15:25.430817   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:15:25.440277   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:15:25.440345   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:15:25.453579   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
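The sysctl probe above exits with status 255 only because br_netfilter is not loaded yet; loading the module makes the bridge-nf-call keys appear, and IPv4 forwarding is then switched on the same way the log does it. In short:

    # Kernel prerequisites mirrored from the log lines above.
    sudo modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables        # resolves once the module is loaded
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # same effect as the sh -c "echo 1 > ..." above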
	I0920 21:15:25.463440   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:25.581550   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:15:25.681183   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:15:25.681259   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:15:25.686062   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:15:25.686107   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:15:25.690069   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:15:25.733819   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:15:25.733901   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.764147   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.795214   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:15:25.796911   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:25.799674   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800015   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:25.800052   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800321   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:15:25.804808   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:15:25.818394   29546 kubeadm.go:883] updating cluster {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:15:25.818488   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:25.818527   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:25.850610   29546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 21:15:25.850709   29546 ssh_runner.go:195] Run: which lz4
	I0920 21:15:25.854671   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 21:15:25.854769   29546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 21:15:25.859047   29546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 21:15:25.859084   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 21:15:27.274564   29546 crio.go:462] duration metric: took 1.41982381s to copy over tarball
	I0920 21:15:27.274662   29546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 21:15:29.326906   29546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.052207909s)
	I0920 21:15:29.326937   29546 crio.go:469] duration metric: took 2.052348508s to extract the tarball
	I0920 21:15:29.326946   29546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 21:15:29.364795   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:29.420778   29546 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:15:29.420807   29546 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:15:29.420830   29546 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.1 crio true true} ...
	I0920 21:15:29.420939   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:15:29.421019   29546 ssh_runner.go:195] Run: crio config
	I0920 21:15:29.467359   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:29.467388   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:29.467398   29546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:15:29.467423   29546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-797954 NodeName:ha-797954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:15:29.467586   29546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-797954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
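The kubeadm config printed above still uses the kubeadm.k8s.io/v1beta3 API, which v1.31 accepts but flags as deprecated later in this log. Once minikube has copied it to /var/tmp/minikube/kubeadm.yaml.new on the node, it could be sanity-checked or rewritten with the bundled kubeadm; this is only a sketch, not something the test run does, and the output path is arbitrary:

    # Validate the generated config, or migrate it off the deprecated v1beta3 API.
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml.new --new-config /tmp/kubeadm-migrated.yaml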
	
	I0920 21:15:29.467615   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:15:29.467665   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:15:29.486323   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:15:29.486445   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
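The static pod above has kube-vip advertise the HA VIP 192.168.39.254 on eth0 and take a leader-election lease named plndr-cp-lock. Once the control plane answers, both can be checked along these lines; the mirror-pod name is node-suffixed and shown only as an illustration:

    # Confirm the control-plane VIP is bound and the kube-vip leader lease exists.
    ip addr show eth0 | grep 192.168.39.254
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get lease plndr-cp-lock
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get pod kube-vip-ha-797954 -o wide   # illustrative name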
	I0920 21:15:29.486507   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:15:29.497349   29546 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:15:29.497421   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 21:15:29.507626   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 21:15:29.524821   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:15:29.542162   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 21:15:29.559146   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 21:15:29.575893   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:15:29.579898   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:15:29.592448   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:29.726203   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:15:29.743307   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.240
	I0920 21:15:29.743326   29546 certs.go:194] generating shared ca certs ...
	I0920 21:15:29.743341   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.743501   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:15:29.743558   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:15:29.743572   29546 certs.go:256] generating profile certs ...
	I0920 21:15:29.743620   29546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:15:29.743648   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt with IP's: []
	I0920 21:15:29.822848   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt ...
	I0920 21:15:29.822877   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt: {Name:mk9531e4d2ca25e31047a503b6559591e4241e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823097   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key ...
	I0920 21:15:29.823113   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key: {Name:mk780710aa75c203f633d5fd9de901af5921b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823230   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a
	I0920 21:15:29.823254   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254]
	I0920 21:15:29.947576   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a ...
	I0920 21:15:29.947608   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a: {Name:mk184aed753f9fcdadd87be1874f607263a47d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947790   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a ...
	I0920 21:15:29.947809   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a: {Name:mke29d551b425a7cf297f097b7c968e0ec8e9532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947904   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:15:29.948003   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:15:29.948077   29546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:15:29.948097   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt with IP's: []
	I0920 21:15:30.056624   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt ...
	I0920 21:15:30.056653   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt: {Name:mk2dc3b0b0385ac4b3979e15bd90a0ff1e5ef2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:30.056825   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key ...
	I0920 21:15:30.056842   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key: {Name:mkbe53da9f2669d754e44e4038f6b341d45e8e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
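The apiserver certificate generated above is issued for the service IP 10.96.0.1, the loopback and node addresses, and the HA VIP 192.168.39.254. Its SANs can be inspected directly from the profile directory, purely as a sketch:

    # Inspect the SANs of the freshly generated apiserver certificate.
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt \
        | grep -A1 'Subject Alternative Name'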
	I0920 21:15:30.056936   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:15:30.056960   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:15:30.056983   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:15:30.057001   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:15:30.057014   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:15:30.057028   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:15:30.057044   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:15:30.057067   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:15:30.057124   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:15:30.057165   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:15:30.057179   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:15:30.057219   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:15:30.057249   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:15:30.057279   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:15:30.057335   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:30.057375   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.057397   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.057417   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.058000   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:15:30.084548   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:15:30.110347   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:15:30.134953   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:15:30.160708   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 21:15:30.184711   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 21:15:30.208901   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:15:30.233498   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:15:30.258374   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:15:30.283305   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:15:30.307934   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:15:30.331963   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:15:30.348398   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:15:30.353975   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:15:30.365091   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369663   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369715   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.375452   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:15:30.386174   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:15:30.397261   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401699   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401757   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.407395   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:15:30.418060   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:15:30.429189   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434030   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434098   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.440220   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
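The openssl x509 -hash runs above compute the subject hash that OpenSSL expects as the symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certificates). The same trust link can be written generically:

    # Generic form of the hash-named trust links created above.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # b5213941 in this log
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"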
	I0920 21:15:30.451523   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:15:30.455873   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:15:30.455921   29546 kubeadm.go:392] StartCluster: {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:15:30.455994   29546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:15:30.456043   29546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:15:30.494216   29546 cri.go:89] found id: ""
	I0920 21:15:30.494275   29546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 21:15:30.504666   29546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 21:15:30.517367   29546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 21:15:30.527919   29546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 21:15:30.527949   29546 kubeadm.go:157] found existing configuration files:
	
	I0920 21:15:30.528010   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 21:15:30.538805   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 21:15:30.538875   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 21:15:30.550872   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 21:15:30.562292   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 21:15:30.562344   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 21:15:30.573641   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.586772   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 21:15:30.586843   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.596548   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 21:15:30.605570   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 21:15:30.605630   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 21:15:30.615094   29546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 21:15:30.716781   29546 kubeadm.go:310] W0920 21:15:30.700841     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.717889   29546 kubeadm.go:310] W0920 21:15:30.702167     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.826716   29546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 21:15:41.799746   29546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 21:15:41.799822   29546 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 21:15:41.799918   29546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 21:15:41.800049   29546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 21:15:41.800185   29546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 21:15:41.800260   29546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 21:15:41.801703   29546 out.go:235]   - Generating certificates and keys ...
	I0920 21:15:41.801811   29546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 21:15:41.801875   29546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 21:15:41.801961   29546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 21:15:41.802058   29546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 21:15:41.802140   29546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 21:15:41.802227   29546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 21:15:41.802292   29546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 21:15:41.802387   29546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802435   29546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 21:15:41.802542   29546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802632   29546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 21:15:41.802717   29546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 21:15:41.802782   29546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 21:15:41.802850   29546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 21:15:41.802936   29546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 21:15:41.803041   29546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 21:15:41.803118   29546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 21:15:41.803204   29546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 21:15:41.803284   29546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 21:15:41.803389   29546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 21:15:41.803478   29546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 21:15:41.804945   29546 out.go:235]   - Booting up control plane ...
	I0920 21:15:41.805074   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 21:15:41.805179   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 21:15:41.805284   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 21:15:41.805403   29546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 21:15:41.805511   29546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 21:15:41.805574   29546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 21:15:41.805717   29546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 21:15:41.805855   29546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 21:15:41.805933   29546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.138308ms
	I0920 21:15:41.806037   29546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 21:15:41.806112   29546 kubeadm.go:310] [api-check] The API server is healthy after 6.067641334s
	I0920 21:15:41.806250   29546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 21:15:41.806400   29546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 21:15:41.806458   29546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 21:15:41.806622   29546 kubeadm.go:310] [mark-control-plane] Marking the node ha-797954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 21:15:41.806712   29546 kubeadm.go:310] [bootstrap-token] Using token: 5kg5kj.1p3ye5r3aa4bs2us
	I0920 21:15:41.808141   29546 out.go:235]   - Configuring RBAC rules ...
	I0920 21:15:41.808280   29546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 21:15:41.808399   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 21:15:41.808574   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 21:15:41.808752   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 21:15:41.808897   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 21:15:41.809018   29546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 21:15:41.809193   29546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 21:15:41.809271   29546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 21:15:41.809343   29546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 21:15:41.809355   29546 kubeadm.go:310] 
	I0920 21:15:41.809445   29546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 21:15:41.809464   29546 kubeadm.go:310] 
	I0920 21:15:41.809580   29546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 21:15:41.809592   29546 kubeadm.go:310] 
	I0920 21:15:41.809629   29546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 21:15:41.809688   29546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 21:15:41.809745   29546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 21:15:41.809754   29546 kubeadm.go:310] 
	I0920 21:15:41.809832   29546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 21:15:41.809843   29546 kubeadm.go:310] 
	I0920 21:15:41.809916   29546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 21:15:41.809926   29546 kubeadm.go:310] 
	I0920 21:15:41.810006   29546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 21:15:41.810120   29546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 21:15:41.810222   29546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 21:15:41.810233   29546 kubeadm.go:310] 
	I0920 21:15:41.810391   29546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 21:15:41.810518   29546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 21:15:41.810529   29546 kubeadm.go:310] 
	I0920 21:15:41.810655   29546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.810800   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 21:15:41.810821   29546 kubeadm.go:310] 	--control-plane 
	I0920 21:15:41.810824   29546 kubeadm.go:310] 
	I0920 21:15:41.810951   29546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 21:15:41.810988   29546 kubeadm.go:310] 
	I0920 21:15:41.811099   29546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.811201   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
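The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed on the control-plane node from the cluster CA; a minimal sketch, assuming the default kubeadm PKI path /etc/kubernetes/pki/ca.crt (this is the standard kubeadm recipe, not part of the test output):

    # recompute the sha256:<hash> value used by --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'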
	I0920 21:15:41.811212   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:41.811218   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:41.812792   29546 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 21:15:41.814273   29546 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 21:15:41.819866   29546 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 21:15:41.819883   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 21:15:41.845113   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
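After the kindnet manifest is applied, the CNI DaemonSet can be checked with the same bundled kubectl and kubeconfig; a minimal sketch, assuming the stock kindnet manifest labels its pods app=kindnet:

    # list the CNI pods created by the manifest applied above (label selector is an assumption)
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide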
	I0920 21:15:42.242872   29546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 21:15:42.242979   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.243015   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954 minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=true
	I0920 21:15:42.278969   29546 ops.go:34] apiserver oom_adj: -16
	I0920 21:15:42.447710   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.948122   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.448766   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.948784   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.448608   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.948709   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.448374   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.948002   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:46.073838   29546 kubeadm.go:1113] duration metric: took 3.830954868s to wait for elevateKubeSystemPrivileges
	I0920 21:15:46.073874   29546 kubeadm.go:394] duration metric: took 15.617956047s to StartCluster
	I0920 21:15:46.073900   29546 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.073984   29546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.074611   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.074804   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 21:15:46.074835   29546 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:46.074859   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:15:46.074866   29546 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 21:15:46.074942   29546 addons.go:69] Setting storage-provisioner=true in profile "ha-797954"
	I0920 21:15:46.074955   29546 addons.go:234] Setting addon storage-provisioner=true in "ha-797954"
	I0920 21:15:46.074983   29546 addons.go:69] Setting default-storageclass=true in profile "ha-797954"
	I0920 21:15:46.075001   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.075002   29546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-797954"
	I0920 21:15:46.075035   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:46.075376   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075406   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.075455   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075491   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.090209   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0920 21:15:46.090325   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0920 21:15:46.090697   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.090729   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.091211   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091226   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091404   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091444   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091507   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091772   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091914   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.092087   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.092132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.094002   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.094264   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 21:15:46.094718   29546 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 21:15:46.094872   29546 addons.go:234] Setting addon default-storageclass=true in "ha-797954"
	I0920 21:15:46.094901   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.095181   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.095211   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.107218   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0920 21:15:46.107712   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.108266   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.108304   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.108613   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.108761   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.109409   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I0920 21:15:46.109837   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.110315   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.110337   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.110525   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.110662   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.111094   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.111132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.112949   29546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 21:15:46.114785   29546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.114809   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 21:15:46.114835   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.118672   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119149   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.119203   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119483   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.119680   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.119847   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.119996   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:46.128013   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0920 21:15:46.128448   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.128883   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.128912   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.129281   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.129455   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.130827   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.131041   29546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.131057   29546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 21:15:46.131075   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.133356   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133727   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.133763   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133887   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.134032   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.134204   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.134338   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:46.285450   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 21:15:46.287399   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.336627   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.881947   29546 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
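The injected record lands in the Corefile key of the coredns ConfigMap; a minimal sketch of inspecting it afterwards with the same bundled kubectl and kubeconfig:

    # print the Corefile after the hosts { 192.168.39.1 host.minikube.internal } block was inserted
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'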
	I0920 21:15:47.160811   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160830   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.160889   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160910   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161142   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161165   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161176   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161184   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161272   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161305   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161318   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161334   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161364   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161474   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161485   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161517   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161590   29546 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 21:15:47.161611   29546 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 21:15:47.161708   29546 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 21:15:47.161717   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.161727   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.161733   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.162842   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.164042   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.164114   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.176849   29546 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 21:15:47.177419   29546 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 21:15:47.177432   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.177439   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.177443   29546 round_trippers.go:473]     Content-Type: application/json
	I0920 21:15:47.177445   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.180165   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:15:47.180318   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.180329   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.180614   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.180655   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.180655   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.182546   29546 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 21:15:47.183737   29546 addons.go:510] duration metric: took 1.108869145s for enable addons: enabled=[storage-provisioner default-storageclass]
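The PUT to .../storageclasses/standard above is the default-storageclass addon marking that class as the default; a minimal sketch of checking both addons from the host, assuming the kubeconfig path updated earlier and the conventional storage-provisioner pod name:

    kubectl --kubeconfig=/home/jenkins/minikube-integration/19672-9614/kubeconfig get storageclass standard
    kubectl --kubeconfig=/home/jenkins/minikube-integration/19672-9614/kubeconfig -n kube-system get pod storage-provisioner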
	I0920 21:15:47.183771   29546 start.go:246] waiting for cluster config update ...
	I0920 21:15:47.183786   29546 start.go:255] writing updated cluster config ...
	I0920 21:15:47.185521   29546 out.go:201] 
	I0920 21:15:47.187156   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:47.187233   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.188857   29546 out.go:177] * Starting "ha-797954-m02" control-plane node in "ha-797954" cluster
	I0920 21:15:47.190069   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:47.190087   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:15:47.190183   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:15:47.190195   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:15:47.190259   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.190407   29546 start.go:360] acquireMachinesLock for ha-797954-m02: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:15:47.190448   29546 start.go:364] duration metric: took 22.919µs to acquireMachinesLock for "ha-797954-m02"
	I0920 21:15:47.190469   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:47.190532   29546 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 21:15:47.192163   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:15:47.192261   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:47.192300   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:47.207012   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0920 21:15:47.207450   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:47.207959   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:47.207987   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:47.208333   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:47.208512   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:15:47.208644   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:15:47.208810   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:15:47.208842   29546 client.go:168] LocalClient.Create starting
	I0920 21:15:47.208877   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:15:47.208918   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.208938   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209003   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:15:47.209029   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.209043   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209069   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:15:47.209080   29546 main.go:141] libmachine: (ha-797954-m02) Calling .PreCreateCheck
	I0920 21:15:47.209235   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:15:47.209601   29546 main.go:141] libmachine: Creating machine...
	I0920 21:15:47.209615   29546 main.go:141] libmachine: (ha-797954-m02) Calling .Create
	I0920 21:15:47.209743   29546 main.go:141] libmachine: (ha-797954-m02) Creating KVM machine...
	I0920 21:15:47.211023   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing default KVM network
	I0920 21:15:47.211192   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing private KVM network mk-ha-797954
	I0920 21:15:47.211306   29546 main.go:141] libmachine: (ha-797954-m02) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.211329   29546 main.go:141] libmachine: (ha-797954-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:15:47.211404   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.211303   29933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.211513   29546 main.go:141] libmachine: (ha-797954-m02) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:15:47.448349   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.448214   29933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa...
	I0920 21:15:47.615677   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615554   29933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk...
	I0920 21:15:47.615707   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing magic tar header
	I0920 21:15:47.615718   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing SSH key tar header
	I0920 21:15:47.615726   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615660   29933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.615794   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02
	I0920 21:15:47.615812   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 (perms=drwx------)
	I0920 21:15:47.615820   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:15:47.615834   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.615839   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:15:47.615846   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:15:47.615854   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:15:47.615871   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:15:47.615884   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home
	I0920 21:15:47.615890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Skipping /home - not owner
	I0920 21:15:47.615899   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:15:47.615913   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:15:47.615923   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:15:47.615931   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
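The raw backing disk created above can be inspected before the domain is defined; a minimal sketch using qemu-img against the path from the log:

    # report format and size of the freshly created disk image
    qemu-img info /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk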
	I0920 21:15:47.615945   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:47.617011   29546 main.go:141] libmachine: (ha-797954-m02) define libvirt domain using xml: 
	I0920 21:15:47.617032   29546 main.go:141] libmachine: (ha-797954-m02) <domain type='kvm'>
	I0920 21:15:47.617040   29546 main.go:141] libmachine: (ha-797954-m02)   <name>ha-797954-m02</name>
	I0920 21:15:47.617044   29546 main.go:141] libmachine: (ha-797954-m02)   <memory unit='MiB'>2200</memory>
	I0920 21:15:47.617050   29546 main.go:141] libmachine: (ha-797954-m02)   <vcpu>2</vcpu>
	I0920 21:15:47.617057   29546 main.go:141] libmachine: (ha-797954-m02)   <features>
	I0920 21:15:47.617064   29546 main.go:141] libmachine: (ha-797954-m02)     <acpi/>
	I0920 21:15:47.617071   29546 main.go:141] libmachine: (ha-797954-m02)     <apic/>
	I0920 21:15:47.617079   29546 main.go:141] libmachine: (ha-797954-m02)     <pae/>
	I0920 21:15:47.617093   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617100   29546 main.go:141] libmachine: (ha-797954-m02)   </features>
	I0920 21:15:47.617108   29546 main.go:141] libmachine: (ha-797954-m02)   <cpu mode='host-passthrough'>
	I0920 21:15:47.617113   29546 main.go:141] libmachine: (ha-797954-m02)   
	I0920 21:15:47.617117   29546 main.go:141] libmachine: (ha-797954-m02)   </cpu>
	I0920 21:15:47.617121   29546 main.go:141] libmachine: (ha-797954-m02)   <os>
	I0920 21:15:47.617128   29546 main.go:141] libmachine: (ha-797954-m02)     <type>hvm</type>
	I0920 21:15:47.617133   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='cdrom'/>
	I0920 21:15:47.617140   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='hd'/>
	I0920 21:15:47.617147   29546 main.go:141] libmachine: (ha-797954-m02)     <bootmenu enable='no'/>
	I0920 21:15:47.617156   29546 main.go:141] libmachine: (ha-797954-m02)   </os>
	I0920 21:15:47.617173   29546 main.go:141] libmachine: (ha-797954-m02)   <devices>
	I0920 21:15:47.617187   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='cdrom'>
	I0920 21:15:47.617195   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/boot2docker.iso'/>
	I0920 21:15:47.617200   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hdc' bus='scsi'/>
	I0920 21:15:47.617205   29546 main.go:141] libmachine: (ha-797954-m02)       <readonly/>
	I0920 21:15:47.617212   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617217   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='disk'>
	I0920 21:15:47.617226   29546 main.go:141] libmachine: (ha-797954-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:15:47.617233   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk'/>
	I0920 21:15:47.617247   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hda' bus='virtio'/>
	I0920 21:15:47.617263   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617275   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617287   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='mk-ha-797954'/>
	I0920 21:15:47.617297   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617304   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617314   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617325   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='default'/>
	I0920 21:15:47.617331   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617336   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617350   29546 main.go:141] libmachine: (ha-797954-m02)     <serial type='pty'>
	I0920 21:15:47.617362   29546 main.go:141] libmachine: (ha-797954-m02)       <target port='0'/>
	I0920 21:15:47.617368   29546 main.go:141] libmachine: (ha-797954-m02)     </serial>
	I0920 21:15:47.617379   29546 main.go:141] libmachine: (ha-797954-m02)     <console type='pty'>
	I0920 21:15:47.617391   29546 main.go:141] libmachine: (ha-797954-m02)       <target type='serial' port='0'/>
	I0920 21:15:47.617402   29546 main.go:141] libmachine: (ha-797954-m02)     </console>
	I0920 21:15:47.617411   29546 main.go:141] libmachine: (ha-797954-m02)     <rng model='virtio'>
	I0920 21:15:47.617441   29546 main.go:141] libmachine: (ha-797954-m02)       <backend model='random'>/dev/random</backend>
	I0920 21:15:47.617464   29546 main.go:141] libmachine: (ha-797954-m02)     </rng>
	I0920 21:15:47.617472   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617481   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617489   29546 main.go:141] libmachine: (ha-797954-m02)   </devices>
	I0920 21:15:47.617497   29546 main.go:141] libmachine: (ha-797954-m02) </domain>
	I0920 21:15:47.617508   29546 main.go:141] libmachine: (ha-797954-m02) 
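The XML above is what libmachine hands to libvirt; a minimal sketch of inspecting the result out-of-band with virsh, using the qemu:///system URI from the machine config:

    virsh -c qemu:///system dumpxml ha-797954-m02          # defined domain, including generated MAC addresses
    virsh -c qemu:///system net-dhcp-leases mk-ha-797954   # leases on the cluster network once the guest boots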
	I0920 21:15:47.624001   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:b4:c0:b8 in network default
	I0920 21:15:47.624558   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring networks are active...
	I0920 21:15:47.624572   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:47.625191   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network default is active
	I0920 21:15:47.625499   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network mk-ha-797954 is active
	I0920 21:15:47.625846   29546 main.go:141] libmachine: (ha-797954-m02) Getting domain xml...
	I0920 21:15:47.626575   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:48.860761   29546 main.go:141] libmachine: (ha-797954-m02) Waiting to get IP...
	I0920 21:15:48.861461   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:48.861850   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:48.861881   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:48.861842   29933 retry.go:31] will retry after 246.176053ms: waiting for machine to come up
	I0920 21:15:49.109266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.109702   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.109741   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.109653   29933 retry.go:31] will retry after 305.989794ms: waiting for machine to come up
	I0920 21:15:49.417320   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.417758   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.417775   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.417722   29933 retry.go:31] will retry after 399.439851ms: waiting for machine to come up
	I0920 21:15:49.818420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.818870   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.818890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.818832   29933 retry.go:31] will retry after 366.677131ms: waiting for machine to come up
	I0920 21:15:50.187440   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.187949   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.187984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.187899   29933 retry.go:31] will retry after 641.811509ms: waiting for machine to come up
	I0920 21:15:50.831780   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.832244   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.832266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.832186   29933 retry.go:31] will retry after 849.351875ms: waiting for machine to come up
	I0920 21:15:51.683009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:51.683497   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:51.683523   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:51.683449   29933 retry.go:31] will retry after 880.265456ms: waiting for machine to come up
	I0920 21:15:52.564861   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:52.565353   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:52.565377   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:52.565317   29933 retry.go:31] will retry after 1.175823596s: waiting for machine to come up
	I0920 21:15:53.742596   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:53.743093   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:53.743120   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:53.743040   29933 retry.go:31] will retry after 1.383827805s: waiting for machine to come up
	I0920 21:15:55.128520   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:55.129000   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:55.129028   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:55.128941   29933 retry.go:31] will retry after 2.175983574s: waiting for machine to come up
	I0920 21:15:57.306647   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:57.307130   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:57.307166   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:57.307067   29933 retry.go:31] will retry after 2.423186714s: waiting for machine to come up
	I0920 21:15:59.731527   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:59.731911   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:59.731934   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:59.731876   29933 retry.go:31] will retry after 3.330564372s: waiting for machine to come up
	I0920 21:16:03.063640   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:03.063984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:16:03.064008   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:16:03.063942   29933 retry.go:31] will retry after 4.208745782s: waiting for machine to come up
	I0920 21:16:07.274472   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.274913   29546 main.go:141] libmachine: (ha-797954-m02) Found IP for machine: 192.168.39.63
	I0920 21:16:07.274942   29546 main.go:141] libmachine: (ha-797954-m02) Reserving static IP address...
	I0920 21:16:07.274956   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has current primary IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.275334   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "ha-797954-m02", mac: "52:54:00:53:e9:69", ip: "192.168.39.63"} in network mk-ha-797954
	I0920 21:16:07.352055   29546 main.go:141] libmachine: (ha-797954-m02) Reserved static IP address: 192.168.39.63
	I0920 21:16:07.352089   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:07.352098   29546 main.go:141] libmachine: (ha-797954-m02) Waiting for SSH to be available...
	I0920 21:16:07.354681   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.355003   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954
	I0920 21:16:07.355033   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:53:e9:69
	I0920 21:16:07.355155   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:07.355181   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:07.355212   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:07.355231   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:07.355247   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:07.358863   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:16:07.358889   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:16:07.358899   29546 main.go:141] libmachine: (ha-797954-m02) DBG | command : exit 0
	I0920 21:16:07.358910   29546 main.go:141] libmachine: (ha-797954-m02) DBG | err     : exit status 255
	I0920 21:16:07.358922   29546 main.go:141] libmachine: (ha-797954-m02) DBG | output  : 
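The exit status 255 here is expected: no DHCP lease has appeared yet, so the host after docker@ in the ssh command above is still empty. A minimal sketch of the equivalent manual probe once the lease exists, reusing the options from the log (IP taken from the later successful attempt):

    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa \
      docker@192.168.39.63 'exit 0'; echo "exit=$?"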
	I0920 21:16:10.359142   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:10.362209   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362612   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.362643   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362812   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:10.362833   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:10.362856   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:10.362864   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:10.362874   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:10.491233   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 21:16:10.491477   29546 main.go:141] libmachine: (ha-797954-m02) KVM machine creation complete!
	I0920 21:16:10.491754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:10.492323   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492507   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492661   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:16:10.492673   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetState
	I0920 21:16:10.493804   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:16:10.493816   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:16:10.493821   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:16:10.493826   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.495917   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496311   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.496341   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496456   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.496641   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496775   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496906   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.497045   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.497249   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.497260   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:16:10.606330   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.606353   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:16:10.606360   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.608723   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609117   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.609139   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.609503   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609652   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609759   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.609876   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.610071   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.610085   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:16:10.724120   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:16:10.724202   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:16:10.724212   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:16:10.724219   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724500   29546 buildroot.go:166] provisioning hostname "ha-797954-m02"
	I0920 21:16:10.724528   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724726   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.727578   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728013   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.728055   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728155   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.728348   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728482   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728598   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.728757   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.728927   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.728942   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m02 && echo "ha-797954-m02" | sudo tee /etc/hostname
	I0920 21:16:10.853869   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m02
	
	I0920 21:16:10.853893   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.856573   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.856928   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.856950   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.857120   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.857298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857491   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857632   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.857799   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.858001   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.858027   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:16:10.976857   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.976894   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:16:10.976921   29546 buildroot.go:174] setting up certificates
	I0920 21:16:10.976930   29546 provision.go:84] configureAuth start
	I0920 21:16:10.976939   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.977193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:10.979787   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980127   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.980156   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980310   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.982562   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.982952   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.982987   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.983144   29546 provision.go:143] copyHostCerts
	I0920 21:16:10.983172   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983211   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:16:10.983220   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983297   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:16:10.983390   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983415   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:16:10.983423   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983457   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:16:10.983516   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983538   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:16:10.983547   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983578   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:16:10.983645   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m02 san=[127.0.0.1 192.168.39.63 ha-797954-m02 localhost minikube]
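
For reference, the server-certificate step above issues a cert signed by the machine CA with exactly the SANs listed in the log line. minikube does this in Go (crypto/x509); an equivalent openssl sketch, with illustrative file names, would be:

    # Issue a server key/CSR, then sign it with the machine CA, adding the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-797954-m02/CN=minikube"
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.63,DNS:ha-797954-m02,DNS:localhost,DNS:minikube")
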
	I0920 21:16:11.277389   29546 provision.go:177] copyRemoteCerts
	I0920 21:16:11.277450   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:16:11.277472   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.280305   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280630   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.280660   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280787   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.280985   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.281128   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.281262   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.365413   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:16:11.365500   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:16:11.392730   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:16:11.392804   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:16:11.418158   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:16:11.418231   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 21:16:11.442606   29546 provision.go:87] duration metric: took 465.662046ms to configureAuth
	I0920 21:16:11.442634   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:16:11.442797   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:11.442863   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.445626   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446004   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.446037   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.446437   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446574   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446682   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.446829   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.447016   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.447037   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:16:11.680912   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:16:11.680937   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:16:11.680947   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetURL
	I0920 21:16:11.682272   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using libvirt version 6000000
	I0920 21:16:11.684914   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685189   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.685217   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685358   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:16:11.685371   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:16:11.685378   29546 client.go:171] duration metric: took 24.476526726s to LocalClient.Create
	I0920 21:16:11.685403   29546 start.go:167] duration metric: took 24.47659369s to libmachine.API.Create "ha-797954"
	I0920 21:16:11.685417   29546 start.go:293] postStartSetup for "ha-797954-m02" (driver="kvm2")
	I0920 21:16:11.685429   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:16:11.685450   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.685687   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:16:11.685714   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.687637   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.687978   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.688009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.688123   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.688289   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.688429   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.688559   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.773487   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:16:11.777926   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:16:11.777968   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:16:11.778045   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:16:11.778142   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:16:11.778153   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:16:11.778236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:16:11.787777   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:11.814167   29546 start.go:296] duration metric: took 128.737842ms for postStartSetup
	I0920 21:16:11.814211   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:11.814756   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.817446   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.817813   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.817843   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.818091   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:16:11.818271   29546 start.go:128] duration metric: took 24.627729213s to createHost
	I0920 21:16:11.818293   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.820157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.820457   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820597   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.820754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820876   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820975   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.821154   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.821361   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.821372   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:16:11.935903   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866971.911740237
	
	I0920 21:16:11.935925   29546 fix.go:216] guest clock: 1726866971.911740237
	I0920 21:16:11.935931   29546 fix.go:229] Guest: 2024-09-20 21:16:11.911740237 +0000 UTC Remote: 2024-09-20 21:16:11.818282234 +0000 UTC m=+73.759482878 (delta=93.458003ms)
	I0920 21:16:11.935947   29546 fix.go:200] guest clock delta is within tolerance: 93.458003ms
	I0920 21:16:11.935951   29546 start.go:83] releasing machines lock for "ha-797954-m02", held for 24.74549273s
	I0920 21:16:11.935977   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.936247   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.938751   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.939097   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.939121   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.941536   29546 out.go:177] * Found network options:
	I0920 21:16:11.943384   29546 out.go:177]   - NO_PROXY=192.168.39.240
	W0920 21:16:11.944703   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.944730   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945270   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945436   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945528   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:16:11.945565   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	W0920 21:16:11.945595   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.945670   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:16:11.945694   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.948268   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948343   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948669   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948696   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948719   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948739   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948891   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.948988   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.949074   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949141   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949205   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949260   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949324   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.949401   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:12.182357   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:16:12.188336   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:16:12.188410   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:16:12.207463   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:16:12.207490   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:16:12.207553   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:16:12.224528   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:16:12.239330   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:16:12.239398   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:16:12.254085   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:16:12.268825   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:16:12.382055   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:16:12.535413   29546 docker.go:233] disabling docker service ...
	I0920 21:16:12.535492   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:16:12.551349   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:16:12.564334   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:16:12.703477   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:16:12.833908   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:16:12.848370   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:16:12.867636   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:16:12.867695   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.878698   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:16:12.878757   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.889749   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.900887   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.911745   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:16:12.922758   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.933646   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.951100   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
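
Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on roughly the settings sketched below. This is a reconstruction of the intended end state (section placement follows CRI-O's documented config layout); the real file is edited in place by the commands above, not overwritten like this:

    # Illustrative only: write the equivalent drop-in in one shot.
    sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<'EOF'
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"
    EOF
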
	I0920 21:16:12.961869   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:16:12.971480   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:16:12.971539   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:16:12.984107   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:16:12.993914   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:13.101378   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:16:13.200043   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:16:13.200113   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:16:13.204910   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:16:13.204975   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:16:13.208759   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:16:13.247360   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:16:13.247450   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.276920   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.308600   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:16:13.310088   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:16:13.311453   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:13.313821   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:13.314177   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314359   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:16:13.318746   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:16:13.331582   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:16:13.331825   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:13.332085   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.332122   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.347535   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0920 21:16:13.348063   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.348528   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.348548   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.348889   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.349049   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:16:13.350568   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:13.350840   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.350872   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.365373   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0920 21:16:13.365792   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.366261   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.366281   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.366592   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.366760   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:13.366907   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.63
	I0920 21:16:13.366920   29546 certs.go:194] generating shared ca certs ...
	I0920 21:16:13.366937   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.367097   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:16:13.367133   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:16:13.367142   29546 certs.go:256] generating profile certs ...
	I0920 21:16:13.367211   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:16:13.367235   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c
	I0920 21:16:13.367251   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.254]
	I0920 21:16:13.455682   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c ...
	I0920 21:16:13.455713   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c: {Name:mkf7fe9b668446e8a4052aa048fbbf7677e2fbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.455914   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c ...
	I0920 21:16:13.455929   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c: {Name:mk7e6a911b0af488356d39a10736e055b37e6a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.456006   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:16:13.456131   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:16:13.456249   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:16:13.456263   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:16:13.456276   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:16:13.456287   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:16:13.456297   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:16:13.456308   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:16:13.456317   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:16:13.456327   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:16:13.456340   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:16:13.456384   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:16:13.456413   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:16:13.456423   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:16:13.456445   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:16:13.456466   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:16:13.456487   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:16:13.456523   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:13.456549   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:16:13.456563   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:13.456574   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:16:13.456603   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:13.459546   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.459920   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:13.459938   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.460181   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:13.460348   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:13.460507   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:13.460594   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:13.535461   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:16:13.541124   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:16:13.556682   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:16:13.564013   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:16:13.575182   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:16:13.579951   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:16:13.591192   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:16:13.595498   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:16:13.606893   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:16:13.611454   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:16:13.622000   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:16:13.627844   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:16:13.640161   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:16:13.667082   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:16:13.691675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:16:13.718752   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:16:13.742806   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 21:16:13.767051   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:16:13.790981   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:16:13.815475   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:16:13.840035   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:16:13.864033   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:16:13.887540   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:16:13.912601   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:16:13.929455   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:16:13.946232   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:16:13.962927   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:16:13.979514   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:16:13.996480   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:16:14.013458   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:16:14.030697   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:16:14.036567   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:16:14.047523   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052253   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052312   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.058027   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:16:14.068874   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:16:14.080054   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084884   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084969   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.091061   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:16:14.102348   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:16:14.113259   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117584   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117628   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.123165   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:16:14.133845   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:16:14.137859   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:16:14.137916   29546 kubeadm.go:934] updating node {m02 192.168.39.63 8443 v1.31.1 crio true true} ...
	I0920 21:16:14.138011   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:16:14.138044   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:16:14.138082   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:16:14.157200   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:16:14.157256   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
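
Once this generated manifest lands in /etc/kubernetes/manifests (it is copied there a few steps below), the kubelet runs kube-vip as a static pod and the virtual IP 192.168.39.254 fronts the control-plane API servers. A quick way to check the result on a control-plane node, outside of what the test itself does:

    # Is the kube-vip static pod running under CRI-O?
    sudo crictl ps --name kube-vip
    # The node currently holding the leader lease should have the VIP bound on eth0.
    ip addr show eth0 | grep 192.168.39.254
    # The API server should answer through the VIP; /version is readable without credentials by default.
    curl -k https://192.168.39.254:8443/version
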
	I0920 21:16:14.157308   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.167159   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:16:14.167253   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.176775   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:16:14.176800   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176860   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176869   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 21:16:14.176896   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 21:16:14.181293   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:16:14.181317   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:16:14.830604   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.830707   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.836995   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:16:14.837029   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:16:14.885458   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:16:14.921200   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.921297   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.928788   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:16:14.928843   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
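
The three transfers above are served from minikube's local download cache, which is populated from dl.k8s.io using the checksum URLs shown earlier. Done by hand, fetching and verifying one of these binaries looks roughly like this (version and architecture taken from the log):

    VERSION=v1.31.1
    BIN=kubelet
    # Download the release binary and its published sha256, verify, then install it where the kubelet service expects it.
    curl -fsSLO "https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/${BIN}"
    echo "$(curl -fsSL https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/${BIN}.sha256)  ${BIN}" | sha256sum --check
    sudo install -D -m 0755 "${BIN}" "/var/lib/minikube/binaries/${VERSION}/${BIN}"
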
	I0920 21:16:15.405307   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:16:15.415161   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 21:16:15.432203   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:16:15.449139   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:16:15.466408   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:16:15.470447   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:16:15.483385   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:15.605870   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:15.622726   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:15.623063   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:15.623100   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:15.638072   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I0920 21:16:15.638612   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:15.639116   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:15.639140   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:15.639467   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:15.639639   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:15.639767   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:16:15.639881   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:16:15.639903   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:15.642798   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643197   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:15.643225   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643351   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:15.643513   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:15.643635   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:15.643759   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:15.792657   29546 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:15.792709   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443"
	I0920 21:16:37.363850   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443": (21.571119677s)
	I0920 21:16:37.363880   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
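The three commands above are the whole join sequence: print a non-expiring join token on the existing control plane, run the resulting kubeadm join with the control-plane flags on the new machine, then enable and start kubelet. Purely for illustration, a minimal Go sketch of that flow using plain os/exec in place of minikube's SSH runner; the function names, host handling, and flag assembly here are assumptions, not the harness's actual code.

package joinsketch

import (
	"fmt"
	"os/exec"
	"strings"
)

// printJoinCommand would run on the existing control-plane host: it asks
// kubeadm for a ready-made "kubeadm join ..." line with a non-expiring token.
func printJoinCommand() (string, error) {
	out, err := exec.Command("sudo", "kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		return "", fmt.Errorf("token create: %w", err)
	}
	return strings.TrimSpace(string(out)), nil
}

// joinControlPlane would run on the joining node: it adds control-plane
// flags like those in the log, then reloads and starts kubelet.
func joinControlPlane(joinCmd, advertiseIP string) error {
	full := fmt.Sprintf("sudo %s --control-plane --apiserver-advertise-address=%s",
		joinCmd, advertiseIP)
	if err := exec.Command("/bin/bash", "-c", full).Run(); err != nil {
		return fmt.Errorf("kubeadm join: %w", err)
	}
	return exec.Command("/bin/bash", "-c",
		"sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet").Run()
}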
	I0920 21:16:37.826315   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m02 minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:16:37.965429   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:16:38.074454   29546 start.go:319] duration metric: took 22.434680822s to joinCluster
	I0920 21:16:38.074530   29546 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:38.074847   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:38.076237   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:16:38.077911   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:38.297124   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:38.315206   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:16:38.315425   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:16:38.315479   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0920 21:16:38.315667   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m02" to be "Ready" ...
	I0920 21:16:38.315752   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.315761   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.315768   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.315773   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.327510   29546 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
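From here the log repeats the same GET roughly every 500ms until ha-797954-m02 reports Ready. For reference, a minimal client-go sketch of an equivalent readiness poll; the kubeconfig path is a placeholder, and minikube's own node_ready.go loop differs in its retry and logging details.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same budget as the 6m0s wait above
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-797954-m02", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}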
	I0920 21:16:38.816509   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.816531   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.816539   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.816543   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.820223   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.316657   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.316678   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.316687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.316692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.320207   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.816039   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.816063   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.816076   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.816081   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.819353   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.316075   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.316098   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.316106   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.316111   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.319539   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.320391   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:40.816105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.816128   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.816139   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.816143   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.819586   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:41.316211   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.316235   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.316245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.316249   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.349189   29546 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0920 21:16:41.816015   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.816038   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.816046   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.816049   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.819036   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:42.316442   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.316466   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.316476   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.316482   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.319998   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:42.320895   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:42.816283   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.816308   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.816320   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.816325   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.819743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.316284   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.316312   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.316323   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.316329   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.319785   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.816709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.816731   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.816742   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.816747   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.820871   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:44.316171   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.316193   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.316204   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.316208   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.319512   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.816802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.816826   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.816834   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.816839   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.820164   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.820813   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:45.315957   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.315983   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.315990   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.315994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.322831   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:16:45.816465   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.816487   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.816493   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.816499   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.820400   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.316676   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.316696   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.316705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.316710   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.320533   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.816215   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.816236   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.816245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.816250   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.820038   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:47.316157   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.316175   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.316182   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.316187   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.319007   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:47.319706   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:47.816061   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.816084   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.816093   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.816099   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.819550   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:48.316265   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.316297   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.316309   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.316315   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.324229   29546 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 21:16:48.816174   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.816197   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.816207   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.816212   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.819164   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:49.316193   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.316219   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.316227   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.316231   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.319642   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:49.320250   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:49.816604   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.816629   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.816639   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.816643   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.820105   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.316008   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.316032   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.316043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.316050   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.320037   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.816542   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.816569   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.816578   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.816584   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.819913   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.316794   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.316842   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.316847   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.320154   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.320874   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:51.816190   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.816232   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.816240   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.816245   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.819606   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:52.316313   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.316334   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.316342   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.316345   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.319035   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:52.815947   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.815976   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.815984   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.815989   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.819657   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.316260   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.316286   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.316296   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.316302   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.319552   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.816326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.816347   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.816357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.816361   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.820253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.820949   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:54.316293   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.316325   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.316345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.316351   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.320156   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:54.816089   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.816111   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.816119   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.816122   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.819881   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.316810   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.316841   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.316844   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.320308   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.816768   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.816797   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.816808   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.816814   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.820631   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.821331   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:56.315901   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.315944   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.315956   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.315969   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.320010   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:56.816119   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.816140   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.816154   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.816158   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.820007   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.316463   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.316490   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.316501   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.316509   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.320023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.815912   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.815949   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.815957   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.815962   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.819365   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.316195   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.316218   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.316226   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.316230   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.320091   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.320848   29546 node_ready.go:49] node "ha-797954-m02" has status "Ready":"True"
	I0920 21:16:58.320865   29546 node_ready.go:38] duration metric: took 20.005171668s for node "ha-797954-m02" to be "Ready" ...
	I0920 21:16:58.320873   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:16:58.320944   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:16:58.320955   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.320962   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.320966   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.326211   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:16:58.332482   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.332585   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:16:58.332594   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.332601   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.332605   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.336131   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.337007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.337022   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.337031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.337034   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.339737   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.340269   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.340285   29546 pod_ready.go:82] duration metric: took 7.773484ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
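The same pattern now repeats for each system-critical pod: fetch the pod, fetch the node it runs on, and count the pod healthy once its PodReady condition is True. A small helper expressing that per-pod check, assuming a corev1.Pod already fetched with client-go; it approximates what pod_ready.go is doing rather than reproducing it.

package podsketch

import (
	corev1 "k8s.io/api/core/v1"
)

// podReady treats a running pod as healthy once its PodReady condition
// reports True, which is the status the log lines above are waiting for.
func podReady(p *corev1.Pod) bool {
	if p.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}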
	I0920 21:16:58.340294   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.340345   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:16:58.340352   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.340359   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.340363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.343186   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.343740   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.343754   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.343761   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.343764   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.346434   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.347695   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.347713   29546 pod_ready.go:82] duration metric: took 7.413319ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347723   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347819   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:16:58.347828   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.347835   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.347838   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.350799   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.351620   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.351636   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.351645   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.351650   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.354248   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.354637   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.354653   29546 pod_ready.go:82] duration metric: took 6.924523ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354662   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354733   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:16:58.354740   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.354747   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.354750   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.357249   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.357873   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.357891   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.357902   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.357909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.360353   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.360737   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.360754   29546 pod_ready.go:82] duration metric: took 6.065117ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.360772   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.517254   29546 request.go:632] Waited for 156.41306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517351   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517362   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.517375   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.517386   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.520791   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.716729   29546 request.go:632] Waited for 195.35622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716806   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716813   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.716825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.716832   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.720994   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:58.721498   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.721522   29546 pod_ready.go:82] duration metric: took 360.733755ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
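The "Waited ... due to client-side throttling, not priority and fairness" lines are client-go's default client-side rate limiter (QPS 5, burst 10) pacing these back-to-back GETs; they are delays in the test client, not in the API server. A sketch of how a client could relax that limit on its rest.Config, shown only for illustration; the test keeps the defaults, and the kubeconfig path is a placeholder.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// client-go throttles at QPS=5 / Burst=10 by default; raising both lets
	// short bursts of requests through without the request.go waits above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}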
	I0920 21:16:58.721535   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.916778   29546 request.go:632] Waited for 195.15835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916853   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916860   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.916871   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.916879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.920176   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.116231   29546 request.go:632] Waited for 195.32753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116335   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.116348   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.116363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.120233   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.120679   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.120701   29546 pod_ready.go:82] duration metric: took 399.159254ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.120711   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.316770   29546 request.go:632] Waited for 195.998601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316829   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316835   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.316843   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.316848   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.320380   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.516308   29546 request.go:632] Waited for 195.285053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516365   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516370   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.516385   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.516408   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.519762   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.520332   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.520354   29546 pod_ready.go:82] duration metric: took 399.636336ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.520366   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.716282   29546 request.go:632] Waited for 195.838293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716355   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.716364   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.716369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.719757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.916877   29546 request.go:632] Waited for 196.253642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916926   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916930   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.916941   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.916954   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.920360   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.920904   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.920927   29546 pod_ready.go:82] duration metric: took 400.552955ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.920940   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.116981   29546 request.go:632] Waited for 195.965997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117046   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117052   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.117059   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.117064   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.120477   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.316320   29546 request.go:632] Waited for 195.169192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316423   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316431   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.316441   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.316453   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.319724   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.320229   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.320259   29546 pod_ready.go:82] duration metric: took 399.311666ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.320270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.516281   29546 request.go:632] Waited for 195.931067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516373   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516383   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.516394   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.516403   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.521923   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:17:00.716927   29546 request.go:632] Waited for 194.244639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.716994   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.717002   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.717011   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.717027   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.720660   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.721230   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.721249   29546 pod_ready.go:82] duration metric: took 400.97344ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.721259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.916498   29546 request.go:632] Waited for 195.181307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916580   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.916588   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.916592   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.920596   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.116571   29546 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116628   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116634   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.116641   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.116643   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.119984   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.120450   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.120470   29546 pod_ready.go:82] duration metric: took 399.204122ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.120483   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.316631   29546 request.go:632] Waited for 196.084204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316697   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316703   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.316710   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.316712   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.320693   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.516582   29546 request.go:632] Waited for 195.353132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516664   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516673   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.516684   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.516692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.519980   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.520627   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.520645   29546 pod_ready.go:82] duration metric: took 400.154808ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.520658   29546 pod_ready.go:39] duration metric: took 3.19977376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:17:01.520675   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:17:01.520730   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:17:01.537086   29546 api_server.go:72] duration metric: took 23.462517508s to wait for apiserver process to appear ...
	I0920 21:17:01.537114   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:17:01.537136   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:17:01.541714   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
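With every component pod Ready, the API server's own health is confirmed against /healthz, which answers with the literal string "ok". A sketch of the same probe through client-go's REST client; the kubeconfig path is a placeholder.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz on the API server; a healthy control plane returns "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}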
	I0920 21:17:01.541802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:17:01.541813   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.541825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.541834   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.542686   29546 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 21:17:01.542806   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:17:01.542827   29546 api_server.go:131] duration metric: took 5.705549ms to wait for apiserver health ...
	I0920 21:17:01.542841   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:17:01.717246   29546 request.go:632] Waited for 174.332661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717358   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.717367   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.717379   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.722159   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.726592   29546 system_pods.go:59] 17 kube-system pods found
	I0920 21:17:01.726620   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:01.726625   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:01.726629   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:01.726632   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:01.726636   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:01.726639   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:01.726642   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:01.726645   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:01.726650   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:01.726654   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:01.726657   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:01.726660   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:01.726662   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:01.726665   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:01.726668   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:01.726671   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:01.726674   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:01.726679   29546 system_pods.go:74] duration metric: took 183.829123ms to wait for pod list to return data ...
	I0920 21:17:01.726686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:17:01.917125   29546 request.go:632] Waited for 190.363748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917200   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917205   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.917212   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.917217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.921868   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.922074   29546 default_sa.go:45] found service account: "default"
	I0920 21:17:01.922087   29546 default_sa.go:55] duration metric: took 195.393903ms for default service account to be created ...
	I0920 21:17:01.922095   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:17:02.116523   29546 request.go:632] Waited for 194.354422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116574   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116579   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.116585   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.116590   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.121031   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:02.127015   29546 system_pods.go:86] 17 kube-system pods found
	I0920 21:17:02.127040   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:02.127047   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:02.127052   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:02.127056   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:02.127060   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:02.127064   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:02.127067   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:02.127071   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:02.127076   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:02.127079   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:02.127082   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:02.127086   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:02.127089   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:02.127093   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:02.127096   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:02.127099   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:02.127103   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:02.127109   29546 system_pods.go:126] duration metric: took 205.01008ms to wait for k8s-apps to be running ...
	I0920 21:17:02.127118   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:17:02.127161   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:02.143916   29546 system_svc.go:56] duration metric: took 16.788182ms WaitForService to wait for kubelet
	I0920 21:17:02.143952   29546 kubeadm.go:582] duration metric: took 24.069387794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:17:02.143974   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:17:02.316696   29546 request.go:632] Waited for 172.648744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316747   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316752   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.316763   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.316768   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.320760   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:02.321468   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321490   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321503   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321508   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321513   29546 node_conditions.go:105] duration metric: took 177.533781ms to run NodePressure ...
	I0920 21:17:02.321541   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:17:02.321567   29546 start.go:255] writing updated cluster config ...
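The pod, service-account and node checks in the block above are plain REST GETs against the apiserver, logged by the client's round-tripper. For reference, a minimal client-go sketch that performs a similar "list kube-system pods and report their phase" verification; the kubeconfig path is a placeholder, not the path the test harness actually uses:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube keeps its own under the profile dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
	}
}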
	I0920 21:17:02.323943   29546 out.go:201] 
	I0920 21:17:02.325378   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:02.325468   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.326982   29546 out.go:177] * Starting "ha-797954-m03" control-plane node in "ha-797954" cluster
	I0920 21:17:02.328317   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:17:02.328336   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:17:02.328441   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:17:02.328453   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:17:02.328536   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.328735   29546 start.go:360] acquireMachinesLock for ha-797954-m03: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:17:02.328796   29546 start.go:364] duration metric: took 33.998µs to acquireMachinesLock for "ha-797954-m03"
	I0920 21:17:02.328812   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:02.328901   29546 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 21:17:02.330315   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:17:02.330414   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:02.330461   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:02.345778   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0920 21:17:02.346279   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:02.346747   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:02.346772   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:02.347086   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:02.347350   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:02.347468   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:02.347668   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:17:02.347694   29546 client.go:168] LocalClient.Create starting
	I0920 21:17:02.347736   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:17:02.347785   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347800   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347860   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:17:02.347881   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347891   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347908   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:17:02.347919   29546 main.go:141] libmachine: (ha-797954-m03) Calling .PreCreateCheck
	I0920 21:17:02.348067   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:02.348542   29546 main.go:141] libmachine: Creating machine...
	I0920 21:17:02.348556   29546 main.go:141] libmachine: (ha-797954-m03) Calling .Create
	I0920 21:17:02.348764   29546 main.go:141] libmachine: (ha-797954-m03) Creating KVM machine...
	I0920 21:17:02.350052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing default KVM network
	I0920 21:17:02.350134   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing private KVM network mk-ha-797954
	I0920 21:17:02.350236   29546 main.go:141] libmachine: (ha-797954-m03) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.350270   29546 main.go:141] libmachine: (ha-797954-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:17:02.350391   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.350253   30348 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.350445   29546 main.go:141] libmachine: (ha-797954-m03) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:17:02.577042   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.576911   30348 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa...
	I0920 21:17:02.643000   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642827   30348 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk...
	I0920 21:17:02.643045   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing magic tar header
	I0920 21:17:02.643062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing SSH key tar header
	I0920 21:17:02.643075   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642998   30348 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.643166   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03
	I0920 21:17:02.643195   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 (perms=drwx------)
	I0920 21:17:02.643206   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:17:02.643220   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.643226   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:17:02.643312   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:17:02.643333   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:17:02.643345   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:17:02.643356   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:17:02.643367   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:17:02.643381   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:17:02.643392   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:17:02.643412   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home
	I0920 21:17:02.643422   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:02.643428   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Skipping /home - not owner
	I0920 21:17:02.644451   29546 main.go:141] libmachine: (ha-797954-m03) define libvirt domain using xml: 
	I0920 21:17:02.644469   29546 main.go:141] libmachine: (ha-797954-m03) <domain type='kvm'>
	I0920 21:17:02.644475   29546 main.go:141] libmachine: (ha-797954-m03)   <name>ha-797954-m03</name>
	I0920 21:17:02.644480   29546 main.go:141] libmachine: (ha-797954-m03)   <memory unit='MiB'>2200</memory>
	I0920 21:17:02.644485   29546 main.go:141] libmachine: (ha-797954-m03)   <vcpu>2</vcpu>
	I0920 21:17:02.644491   29546 main.go:141] libmachine: (ha-797954-m03)   <features>
	I0920 21:17:02.644498   29546 main.go:141] libmachine: (ha-797954-m03)     <acpi/>
	I0920 21:17:02.644507   29546 main.go:141] libmachine: (ha-797954-m03)     <apic/>
	I0920 21:17:02.644515   29546 main.go:141] libmachine: (ha-797954-m03)     <pae/>
	I0920 21:17:02.644524   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644531   29546 main.go:141] libmachine: (ha-797954-m03)   </features>
	I0920 21:17:02.644536   29546 main.go:141] libmachine: (ha-797954-m03)   <cpu mode='host-passthrough'>
	I0920 21:17:02.644543   29546 main.go:141] libmachine: (ha-797954-m03)   
	I0920 21:17:02.644553   29546 main.go:141] libmachine: (ha-797954-m03)   </cpu>
	I0920 21:17:02.644561   29546 main.go:141] libmachine: (ha-797954-m03)   <os>
	I0920 21:17:02.644566   29546 main.go:141] libmachine: (ha-797954-m03)     <type>hvm</type>
	I0920 21:17:02.644573   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='cdrom'/>
	I0920 21:17:02.644577   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='hd'/>
	I0920 21:17:02.644582   29546 main.go:141] libmachine: (ha-797954-m03)     <bootmenu enable='no'/>
	I0920 21:17:02.644588   29546 main.go:141] libmachine: (ha-797954-m03)   </os>
	I0920 21:17:02.644595   29546 main.go:141] libmachine: (ha-797954-m03)   <devices>
	I0920 21:17:02.644606   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='cdrom'>
	I0920 21:17:02.644627   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/boot2docker.iso'/>
	I0920 21:17:02.644643   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hdc' bus='scsi'/>
	I0920 21:17:02.644649   29546 main.go:141] libmachine: (ha-797954-m03)       <readonly/>
	I0920 21:17:02.644656   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644661   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='disk'>
	I0920 21:17:02.644670   29546 main.go:141] libmachine: (ha-797954-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:17:02.644678   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk'/>
	I0920 21:17:02.644684   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hda' bus='virtio'/>
	I0920 21:17:02.644689   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644698   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644710   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='mk-ha-797954'/>
	I0920 21:17:02.644724   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644734   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644742   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644747   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='default'/>
	I0920 21:17:02.644754   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644759   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644765   29546 main.go:141] libmachine: (ha-797954-m03)     <serial type='pty'>
	I0920 21:17:02.644770   29546 main.go:141] libmachine: (ha-797954-m03)       <target port='0'/>
	I0920 21:17:02.644776   29546 main.go:141] libmachine: (ha-797954-m03)     </serial>
	I0920 21:17:02.644783   29546 main.go:141] libmachine: (ha-797954-m03)     <console type='pty'>
	I0920 21:17:02.644798   29546 main.go:141] libmachine: (ha-797954-m03)       <target type='serial' port='0'/>
	I0920 21:17:02.644809   29546 main.go:141] libmachine: (ha-797954-m03)     </console>
	I0920 21:17:02.644818   29546 main.go:141] libmachine: (ha-797954-m03)     <rng model='virtio'>
	I0920 21:17:02.644842   29546 main.go:141] libmachine: (ha-797954-m03)       <backend model='random'>/dev/random</backend>
	I0920 21:17:02.644850   29546 main.go:141] libmachine: (ha-797954-m03)     </rng>
	I0920 21:17:02.644855   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644858   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644865   29546 main.go:141] libmachine: (ha-797954-m03)   </devices>
	I0920 21:17:02.644875   29546 main.go:141] libmachine: (ha-797954-m03) </domain>
	I0920 21:17:02.644883   29546 main.go:141] libmachine: (ha-797954-m03) 
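The XML printed line by line above is the domain definition the kvm2 driver hands to libvirt. As a rough equivalent, and not the driver's actual code path, the same XML could be saved to a file and defined by hand; the sketch below just shells out to virsh, with placeholder file paths:

package main

import (
	"fmt"
	"os/exec"
)

// Sketch only: define and start the domain from a saved copy of the XML above.
func main() {
	define := exec.Command("virsh", "--connect", "qemu:///system",
		"define", "/tmp/ha-797954-m03.xml") // placeholder path for the saved XML
	if out, err := define.CombinedOutput(); err != nil {
		fmt.Printf("virsh define failed: %v\n%s", err, out)
		return
	}
	start := exec.Command("virsh", "--connect", "qemu:///system",
		"start", "ha-797954-m03")
	if out, err := start.CombinedOutput(); err != nil {
		fmt.Printf("virsh start failed: %v\n%s", err, out)
		return
	}
	fmt.Println("domain defined and started")
}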
	I0920 21:17:02.651978   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:63:a3:c0 in network default
	I0920 21:17:02.652689   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:02.652709   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring networks are active...
	I0920 21:17:02.653425   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network default is active
	I0920 21:17:02.653794   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network mk-ha-797954 is active
	I0920 21:17:02.654206   29546 main.go:141] libmachine: (ha-797954-m03) Getting domain xml...
	I0920 21:17:02.654891   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:03.891211   29546 main.go:141] libmachine: (ha-797954-m03) Waiting to get IP...
	I0920 21:17:03.891998   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:03.892564   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:03.892584   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:03.892529   30348 retry.go:31] will retry after 249.905879ms: waiting for machine to come up
	I0920 21:17:04.144187   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.144615   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.144661   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.144597   30348 retry.go:31] will retry after 316.328897ms: waiting for machine to come up
	I0920 21:17:04.462062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.462511   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.462533   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.462462   30348 retry.go:31] will retry after 364.78104ms: waiting for machine to come up
	I0920 21:17:04.828892   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.829305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.829331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.829266   30348 retry.go:31] will retry after 450.317187ms: waiting for machine to come up
	I0920 21:17:05.280860   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.281373   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.281406   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.281315   30348 retry.go:31] will retry after 476.652629ms: waiting for machine to come up
	I0920 21:17:05.760024   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.760485   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.760518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.760438   30348 retry.go:31] will retry after 666.650329ms: waiting for machine to come up
	I0920 21:17:06.428088   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:06.428570   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:06.428605   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:06.428544   30348 retry.go:31] will retry after 895.276152ms: waiting for machine to come up
	I0920 21:17:07.325647   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:07.326053   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:07.326072   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:07.326016   30348 retry.go:31] will retry after 1.229974937s: waiting for machine to come up
	I0920 21:17:08.557784   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:08.558294   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:08.558321   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:08.558256   30348 retry.go:31] will retry after 1.130091493s: waiting for machine to come up
	I0920 21:17:09.689958   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:09.690504   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:09.690534   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:09.690449   30348 retry.go:31] will retry after 1.704043253s: waiting for machine to come up
	I0920 21:17:11.396325   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:11.396797   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:11.396820   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:11.396759   30348 retry.go:31] will retry after 2.089116318s: waiting for machine to come up
	I0920 21:17:13.487459   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:13.487939   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:13.487964   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:13.487890   30348 retry.go:31] will retry after 2.803235066s: waiting for machine to come up
	I0920 21:17:16.292437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:16.292814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:16.292830   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:16.292780   30348 retry.go:31] will retry after 3.964680632s: waiting for machine to come up
	I0920 21:17:20.260043   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:20.260437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:20.260457   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:20.260397   30348 retry.go:31] will retry after 5.26446379s: waiting for machine to come up
	I0920 21:17:25.528157   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.528572   29546 main.go:141] libmachine: (ha-797954-m03) Found IP for machine: 192.168.39.108
	I0920 21:17:25.528600   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has current primary IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.528606   29546 main.go:141] libmachine: (ha-797954-m03) Reserving static IP address...
	I0920 21:17:25.529052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "ha-797954-m03", mac: "52:54:00:37:e1:1a", ip: "192.168.39.108"} in network mk-ha-797954
	I0920 21:17:25.602757   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:25.602780   29546 main.go:141] libmachine: (ha-797954-m03) Reserved static IP address: 192.168.39.108
	I0920 21:17:25.602810   29546 main.go:141] libmachine: (ha-797954-m03) Waiting for SSH to be available...
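The retry lines above show the driver polling the DHCP leases of network mk-ha-797954 with a growing delay until the new domain acquires 192.168.39.108. A stripped-down sketch of that wait loop; lookupIP is a hypothetical stand-in for the lease query, and the timing constants are illustrative:

package main

import (
	"fmt"
	"time"
)

// waitForIP keeps asking lookupIP until it reports an address or the timeout expires,
// lengthening the pause between attempts on every miss.
func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, ok := lookupIP(); ok {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // back off a little more on every miss
		}
	}
	return "", fmt.Errorf("no IP within %s", timeout)
}

func main() {
	// Toy lookup that "finds" an address after a few attempts.
	attempts := 0
	ip, err := waitForIP(func() (string, bool) {
		attempts++
		return "192.168.39.108", attempts > 3
	}, 30*time.Second)
	fmt.Println(ip, err)
}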
	I0920 21:17:25.605777   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.606165   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954
	I0920 21:17:25.606196   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:37:e1:1a
	I0920 21:17:25.606399   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:25.606418   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:25.606467   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:25.606479   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:25.606491   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:25.610090   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:17:25.610107   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:17:25.610116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | command : exit 0
	I0920 21:17:25.610123   29546 main.go:141] libmachine: (ha-797954-m03) DBG | err     : exit status 255
	I0920 21:17:25.610132   29546 main.go:141] libmachine: (ha-797954-m03) DBG | output  : 
	I0920 21:17:28.610361   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:28.612774   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613300   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.613345   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613482   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:28.613508   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:28.613536   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:28.613547   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:28.613558   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:28.739518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: <nil>: 
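SSH readiness is probed by running exit 0 through the external ssh client until it returns cleanly: the first attempt above is issued before the lease is known (note the empty host in "docker@") and fails with exit status 255, while the retry a few seconds later succeeds. A minimal sketch of the same probe, with placeholder host, user and key path:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Repeatedly run `exit 0` over ssh until the command succeeds or we give up.
func main() {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", "/path/to/id_rsa", // placeholder private key
		"docker@192.168.39.108", // placeholder target
		"exit 0",
	}
	for i := 0; i < 20; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("ssh is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for ssh")
}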
	I0920 21:17:28.739782   29546 main.go:141] libmachine: (ha-797954-m03) KVM machine creation complete!
	I0920 21:17:28.740050   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:28.740606   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740766   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740895   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:17:28.740905   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetState
	I0920 21:17:28.742248   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:17:28.742261   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:17:28.742266   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:17:28.742271   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.744450   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.744848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.744878   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.745030   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.745195   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745319   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745434   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.745605   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.745786   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.745797   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:17:28.858225   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:28.858252   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:17:28.858263   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.860945   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861330   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.861367   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861582   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.861765   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.861923   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.862049   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.862220   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.862409   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.862421   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:17:28.971754   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:17:28.971833   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:17:28.971843   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:17:28.971858   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972111   29546 buildroot.go:166] provisioning hostname "ha-797954-m03"
	I0920 21:17:28.972133   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972282   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.974838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975230   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.975254   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975370   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.975557   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975667   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975797   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.975919   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.976108   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.976126   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m03 && echo "ha-797954-m03" | sudo tee /etc/hostname
	I0920 21:17:29.102868   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m03
	
	I0920 21:17:29.102918   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.106396   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.106814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.106838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.107105   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.107316   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107475   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107629   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.107783   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.107967   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.107983   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:17:29.224170   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:29.224200   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:17:29.224221   29546 buildroot.go:174] setting up certificates
	I0920 21:17:29.224238   29546 provision.go:84] configureAuth start
	I0920 21:17:29.224249   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:29.224497   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:29.228199   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228591   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.228621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228737   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.230940   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231290   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.231315   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231436   29546 provision.go:143] copyHostCerts
	I0920 21:17:29.231461   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231488   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:17:29.231497   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231560   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:17:29.231661   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231679   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:17:29.231683   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231706   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:17:29.231764   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231786   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:17:29.231795   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231816   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:17:29.231871   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m03 san=[127.0.0.1 192.168.39.108 ha-797954-m03 localhost minikube]
	I0920 21:17:29.482373   29546 provision.go:177] copyRemoteCerts
	I0920 21:17:29.482434   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:17:29.482467   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.485158   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485535   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.485562   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485735   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.485957   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.486154   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.486290   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.571854   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:17:29.571937   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:17:29.597446   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:17:29.597536   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:17:29.623922   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:17:29.624016   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:17:29.649582   29546 provision.go:87] duration metric: took 425.330124ms to configureAuth
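configureAuth, as logged above, generates a server certificate whose SANs include the node's IP and hostname and then pushes ca.pem, server.pem and server-key.pem to /etc/docker on the guest over the SSH session. As a rough stand-in for that transfer (not minikube's ssh_runner, and with placeholder paths; files land in /tmp here rather than being installed with elevated permissions), plain scp over the same key would do:

package main

import (
	"fmt"
	"os/exec"
)

// Copy the generated certificates to the new node with scp (placeholder paths).
func main() {
	files := map[string]string{
		"/path/to/.minikube/certs/ca.pem":            "/tmp/ca.pem",
		"/path/to/.minikube/machines/server.pem":     "/tmp/server.pem",
		"/path/to/.minikube/machines/server-key.pem": "/tmp/server-key.pem",
	}
	for src, dst := range files {
		cmd := exec.Command("scp",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-i", "/path/to/id_rsa",
			src, "docker@192.168.39.108:"+dst)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("scp %s failed: %v\n%s", src, err, out)
			return
		}
	}
	fmt.Println("certificates copied")
}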
	I0920 21:17:29.649619   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:17:29.649891   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:29.650000   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.652875   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.653362   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653612   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.653807   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.653962   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.654098   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.654239   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.654427   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.654450   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:17:29.884767   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:17:29.884803   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:17:29.884813   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetURL
	I0920 21:17:29.886065   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using libvirt version 6000000
	I0920 21:17:29.888305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888643   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.888670   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888892   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:17:29.888907   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:17:29.888914   29546 client.go:171] duration metric: took 27.541211935s to LocalClient.Create
	I0920 21:17:29.888937   29546 start.go:167] duration metric: took 27.541269184s to libmachine.API.Create "ha-797954"
	I0920 21:17:29.888947   29546 start.go:293] postStartSetup for "ha-797954-m03" (driver="kvm2")
	I0920 21:17:29.888957   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:17:29.888974   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:29.889224   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:17:29.889254   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.891311   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.891640   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.891981   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.892102   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.892225   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.978349   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:17:29.983427   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:17:29.983455   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:17:29.983538   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:17:29.983634   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:17:29.983646   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:17:29.983753   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:17:29.995291   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:30.022903   29546 start.go:296] duration metric: took 133.944509ms for postStartSetup
	I0920 21:17:30.022958   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:30.023537   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.026168   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026517   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.026545   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026878   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:30.027177   29546 start.go:128] duration metric: took 27.698265683s to createHost
	I0920 21:17:30.027205   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.029751   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030099   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.030120   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030296   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.030478   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030638   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030742   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.030890   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:30.031075   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:30.031086   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:17:30.140028   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726867050.118100270
	
	I0920 21:17:30.140050   29546 fix.go:216] guest clock: 1726867050.118100270
	I0920 21:17:30.140059   29546 fix.go:229] Guest: 2024-09-20 21:17:30.11810027 +0000 UTC Remote: 2024-09-20 21:17:30.027192785 +0000 UTC m=+151.968393428 (delta=90.907485ms)
	I0920 21:17:30.140081   29546 fix.go:200] guest clock delta is within tolerance: 90.907485ms
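The guest-clock check above reads `date +%s.%N` from the new VM, parses it into a timestamp, and compares it against the host-side time; only when the delta exceeds a tolerance would the clock be resynced. A minimal Go sketch of that comparison, assuming a 9-digit nanosecond fraction and a one-second tolerance (both are illustrative, not minikube's exact values):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts "seconds.nanoseconds" output from `date +%s.%N`
// (assumed 9-digit fractional part) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseEpoch("1726867050.118100270") // value taken from the log above
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for this sketch
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}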
	I0920 21:17:30.140087   29546 start.go:83] releasing machines lock for "ha-797954-m03", held for 27.811283914s
	I0920 21:17:30.140109   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.140335   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.142848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.143223   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.143247   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.145549   29546 out.go:177] * Found network options:
	I0920 21:17:30.147066   29546 out.go:177]   - NO_PROXY=192.168.39.240,192.168.39.63
	W0920 21:17:30.148256   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.148273   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.148287   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.148801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149026   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149134   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:17:30.149168   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	W0920 21:17:30.149244   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.149266   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.149325   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:17:30.149344   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.151785   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152017   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152092   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152242   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152398   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152420   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152411   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152594   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152619   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152752   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152750   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.152866   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152988   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.415462   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:17:30.422013   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:17:30.422074   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:17:30.439757   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:17:30.439785   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:17:30.439851   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:17:30.457470   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:17:30.472261   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:17:30.472327   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:17:30.488825   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:17:30.504407   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:17:30.628900   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:17:30.786553   29546 docker.go:233] disabling docker service ...
	I0920 21:17:30.786634   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:17:30.801796   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:17:30.815995   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:17:30.961390   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:17:31.087877   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:17:31.103154   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:17:31.123733   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:17:31.123806   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.135304   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:17:31.135368   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.147157   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.158091   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.169385   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:17:31.181217   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.192527   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.210306   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
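The sed edits above normalize /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, force the cgroupfs cgroup manager, drop/re-add conmon_cgroup, and open unprivileged low ports via default_sysctls. A rough Go sketch of the same ensure-key-equals-value rewriting applied to config text; the regex, sample config contents, and the setTOMLKey helper are illustrative, not minikube's code:

package main

import (
	"fmt"
	"regexp"
)

// setTOMLKey replaces an existing `key = ...` line, mirroring the
// `sed -i 's|^.*key = .*$|key = "value"|'` calls in the log above.
func setTOMLKey(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`

	conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
	fmt.Println(conf)
}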
	I0920 21:17:31.221334   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:17:31.233227   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:17:31.233302   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:17:31.249920   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
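The failed sysctl above is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, so the runner falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A hedged local Go sketch of that check-then-load fallback (minikube runs these commands over SSH; the direct exec and file write here are for illustration only and need root):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The sysctl key is absent until the br_netfilter module is loaded.
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("netfilter key missing, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enabling ip_forward failed (needs root):", err)
	}
}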
	I0920 21:17:31.260187   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:31.382201   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:17:31.482956   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:17:31.483037   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:17:31.488476   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:17:31.488554   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:17:31.492584   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:17:31.538709   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:17:31.538785   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.572182   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.603894   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:17:31.605343   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:17:31.606667   29546 out.go:177]   - env NO_PROXY=192.168.39.240,192.168.39.63
	I0920 21:17:31.607748   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:31.610281   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610645   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:31.610679   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610868   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:17:31.615319   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
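The /etc/hosts rewrite above is idempotent: it drops any existing line ending in "\thost.minikube.internal", appends the fresh mapping, and copies the result back over /etc/hosts. A simplified local Go sketch of that pattern; the ensureHostsEntry helper and the local file path are assumptions for illustration, since minikube performs this on the guest over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites an /etc/hosts-style file so that exactly one
// line maps the given hostname, mirroring the grep -v / append / cp above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative local path; the log edits /etc/hosts on the VM.
	if err := ensureHostsEntry("hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}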
	I0920 21:17:31.628360   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:17:31.628620   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:31.628963   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.629005   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.643946   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 21:17:31.644427   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.644862   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.644878   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.645201   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.645370   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:17:31.646870   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:31.647201   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.647238   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.662472   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0920 21:17:31.662941   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.663429   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.663454   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.663746   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.663910   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:31.664076   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.108
	I0920 21:17:31.664086   29546 certs.go:194] generating shared ca certs ...
	I0920 21:17:31.664103   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:31.664243   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:17:31.664295   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:17:31.664308   29546 certs.go:256] generating profile certs ...
	I0920 21:17:31.664400   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:17:31.664432   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a
	I0920 21:17:31.664451   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.108 192.168.39.254]
	I0920 21:17:31.999984   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a ...
	I0920 21:17:32.000017   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a: {Name:mkb265b9a0e489fa9c79757fad17ff69e435c7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000213   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a ...
	I0920 21:17:32.000228   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a: {Name:mk577de20bc228548ac6019ec38c5032817178c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000340   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:17:32.000512   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:17:32.000680   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
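The profile-cert step above signs an apiserver certificate whose IP SANs cover the service IPs, all three control-plane nodes, and the 192.168.39.254 VIP, writes it under a hashed suffix, and then copies it into place as apiserver.crt. A minimal Go sketch of issuing a cert with that SAN list from a CA; the key sizes, validity periods, file names, and the in-memory stand-in CA are assumptions for illustration, not minikube's certificate code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads the existing minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// apiserver cert with the IP SAN list seen in the log above.
	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.63"),
			net.ParseIP("192.168.39.108"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	// Write under a suffixed name first, then move into place, mirroring
	// the apiserver.crt.9a277b8a -> apiserver.crt copy in the log.
	_ = os.WriteFile("apiserver.crt.tmp", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.Rename("apiserver.crt.tmp", "apiserver.crt")
}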
	I0920 21:17:32.000701   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:17:32.000718   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:17:32.000728   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:17:32.000741   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:17:32.000756   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:17:32.000773   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:17:32.000790   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:17:32.019102   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:17:32.019214   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:17:32.019253   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:17:32.019263   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:17:32.019285   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:17:32.019307   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:17:32.019328   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:17:32.019367   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:32.019392   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.019407   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.019422   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.019452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:32.022485   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.022895   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:32.022924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.023142   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:32.023341   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:32.023495   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:32.023639   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:32.099301   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:17:32.104861   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:17:32.119922   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:17:32.124156   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:17:32.134999   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:17:32.139238   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:17:32.149597   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:17:32.154030   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:17:32.167439   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:17:32.172928   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:17:32.184258   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:17:32.189404   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:17:32.201402   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:17:32.227964   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:17:32.252476   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:17:32.280434   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:17:32.305098   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 21:17:32.330545   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:17:32.355250   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:17:32.380481   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:17:32.409052   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:17:32.434675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:17:32.460671   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:17:32.485722   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:17:32.503366   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:17:32.520848   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:17:32.537950   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:17:32.555973   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:17:32.574604   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:17:32.592470   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:17:32.612075   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:17:32.618522   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:17:32.629763   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634570   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634649   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.640735   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:17:32.651846   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:17:32.663634   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668436   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668505   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.674910   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:17:32.686102   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:17:32.697672   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702132   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702197   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.707913   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
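Each CA bundle copied under /usr/share/ca-certificates is linked into /etc/ssl/certs twice: once by name and once under the OpenSSL subject-hash filename (for example b5213941.0 for minikubeCA.pem), so tools that look certificates up by hash can find it. A small Go sketch of that second step, shelling out to openssl for the hash; the linkByHash helper and paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkByHash(certPath string) error {
	// `openssl x509 -hash -noout -in <cert>` prints the subject hash,
	// e.g. "b5213941" for minikubeCA.pem in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("link failed:", err)
	}
}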
	I0920 21:17:32.719066   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:17:32.723392   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:17:32.723441   29546 kubeadm.go:934] updating node {m03 192.168.39.108 8443 v1.31.1 crio true true} ...
	I0920 21:17:32.723510   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:17:32.723543   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:17:32.723582   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:17:32.740751   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:17:32.740840   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
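The static pod manifest above runs kube-vip with leader election (vip_leaderelection on the plndr-cp-lock lease), so exactly one control-plane node answers ARP for 192.168.39.254 and forwards port 8443. A quick hedged Go sketch that probes the VIP's /healthz endpoint once kube-vip is active; the insecure TLS skip is only for illustration, a real check would trust the cluster's ca.crt:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip verification instead of loading ca.crt.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not answering yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("VIP responded with", resp.Status)
}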
	I0920 21:17:32.740906   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.750919   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:17:32.750992   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.760956   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:17:32.760966   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761008   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 21:17:32.761029   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:32.770258   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:17:32.770302   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:17:32.770469   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:17:32.770497   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:17:32.785590   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.785720   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.887818   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:17:32.887866   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
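The `stat -c "%s %y"` probes above implement a cache-or-copy pattern: each of kubectl, kubeadm, and kubelet is checked under /var/lib/minikube/binaries/v1.31.1 and, when absent, pushed from the local cache instead of being re-downloaded. A simplified local Go sketch of that existence check and copy; the ensureBinary helper only checks presence, whereas the real code also compares size/mtime and copies over SSH:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src into destDir only when the target is missing,
// mirroring the stat-then-scp sequence in the log above.
func ensureBinary(src, destDir string) error {
	dest := filepath.Join(destDir, filepath.Base(src))
	if _, err := os.Stat(dest); err == nil {
		return nil // already present, skip the copy
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dest, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Paths are illustrative stand-ins for the cache and target in the log.
	if err := ensureBinary("cache/linux/amd64/v1.31.1/kubelet", "/var/lib/minikube/binaries/v1.31.1"); err != nil {
		fmt.Println("copy failed:", err)
	}
}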
	I0920 21:17:33.724236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:17:33.735002   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 21:17:33.752570   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:17:33.770318   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:17:33.790184   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:17:33.794843   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:17:33.808084   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:33.931534   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:33.948935   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:33.949425   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:33.949481   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:33.965177   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0920 21:17:33.966119   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:33.966693   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:33.966718   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:33.967135   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:33.967319   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:33.967487   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:17:33.967629   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:17:33.967652   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:33.970529   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.970999   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:33.971028   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.971157   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:33.971336   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:33.971484   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:33.971611   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:34.158062   29546 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:34.158116   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443"
	I0920 21:17:56.483713   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443": (22.325561758s)
	I0920 21:17:56.483748   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 21:17:57.184683   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m03 minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:17:57.333060   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:17:57.459382   29546 start.go:319] duration metric: took 23.491890367s to joinCluster
	I0920 21:17:57.459461   29546 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:57.459752   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:57.461273   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:17:57.463650   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:57.782773   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:57.853744   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:17:57.854084   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:17:57.854172   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0920 21:17:57.854425   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m03" to be "Ready" ...
	I0920 21:17:57.854502   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:57.854511   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:57.854521   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:57.854526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:57.858388   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.354955   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.355019   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.355031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.355038   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.358738   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.855362   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.855392   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.855404   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.855410   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.859676   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.355007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.355039   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.355047   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.355051   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.358804   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:59.855573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.855597   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.855608   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.855615   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.859651   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.860298   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:00.355648   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.355670   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.355679   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.355683   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.359549   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:00.854939   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.854979   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.854989   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.854994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.858910   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.354629   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.354654   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.354665   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.354671   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.358103   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.855123   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.855148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.855160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.855167   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.858757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.355433   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.355460   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.355470   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.355478   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.358859   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.359609   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:02.854611   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.854637   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.854646   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.854651   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.858023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.355696   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.355726   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.355738   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.355743   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.359438   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.854679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.854705   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.854714   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.854718   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.858836   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:04.355520   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.355553   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.355565   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.355572   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.359190   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:04.359707   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:04.855114   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.855137   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.855146   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.855151   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.859053   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.355120   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.355148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.355160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.355166   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.359138   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.855384   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.855416   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.855429   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.855436   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.860675   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:06.354757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.354801   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.354812   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.354818   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.358624   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:06.854603   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.854626   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.854636   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.854654   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.868553   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:06.869190   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:07.354649   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.354676   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.354687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.354700   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.359137   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:07.855272   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.855298   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.855322   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.855332   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.859818   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:08.354945   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.354975   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.354983   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.354987   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.358743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:08.854654   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.854694   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.854705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.854726   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.858133   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.354680   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.354702   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.354708   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.354711   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.358286   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.358822   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:09.855309   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.855334   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.855343   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.855347   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.859448   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.355394   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.355413   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.355423   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.355429   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.360033   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.855581   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.855604   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.855615   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.855622   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.859498   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.355312   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.355337   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.355345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.355349   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.358651   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.359127   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:11.855144   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.855174   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.855184   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.855209   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.859059   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.859886   29546 node_ready.go:49] node "ha-797954-m03" has status "Ready":"True"
	I0920 21:18:11.859907   29546 node_ready.go:38] duration metric: took 14.005464709s for node "ha-797954-m03" to be "Ready" ...
	I0920 21:18:11.859917   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:18:11.859993   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:11.860006   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.860016   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.860021   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.868162   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.876454   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.876533   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:18:11.876538   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.876545   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.876552   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.889831   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:11.890530   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.890547   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.890556   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.890559   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.899152   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.899662   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.899680   29546 pod_ready.go:82] duration metric: took 23.198308ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899690   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899746   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:18:11.899753   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.899760   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.899765   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.905423   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.906584   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.906599   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.906607   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.906612   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.909991   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.910558   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.910576   29546 pod_ready.go:82] duration metric: took 10.880105ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910585   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910644   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:18:11.910653   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.910660   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.910665   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.917040   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:11.917709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.917723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.917731   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.917734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.926063   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.926748   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.926766   29546 pod_ready.go:82] duration metric: took 16.17481ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926778   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926852   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:18:11.926862   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.926872   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.926879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.932632   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.933455   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:11.933475   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.933486   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.933492   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.936415   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:11.936956   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.936977   29546 pod_ready.go:82] duration metric: took 10.190753ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.936989   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.055265   29546 request.go:632] Waited for 118.210581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055358   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055367   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.055378   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.055387   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.060304   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.255635   29546 request.go:632] Waited for 194.221335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255716   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.255734   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.255739   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.259841   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.260392   29546 pod_ready.go:93] pod "etcd-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.260414   29546 pod_ready.go:82] duration metric: took 323.41646ms for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.260439   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.455886   29546 request.go:632] Waited for 195.375581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455950   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455955   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.455963   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.455971   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.459141   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.656082   29546 request.go:632] Waited for 196.149744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656147   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656164   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.656172   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.656176   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.659742   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.660232   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.660250   29546 pod_ready.go:82] duration metric: took 399.803876ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.660259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.855251   29546 request.go:632] Waited for 194.932878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855338   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855348   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.855357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.855369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.861682   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:13.055384   29546 request.go:632] Waited for 192.289197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055437   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055442   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.055449   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.055454   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.058687   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.059244   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.059263   29546 pod_ready.go:82] duration metric: took 398.997761ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.059272   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.256048   29546 request.go:632] Waited for 196.689932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256118   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256126   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.256135   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.256141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.259220   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.455932   29546 request.go:632] Waited for 195.902068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456005   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456028   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.456039   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.456043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.459696   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.460211   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.460227   29546 pod_ready.go:82] duration metric: took 400.948779ms for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.460237   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.655197   29546 request.go:632] Waited for 194.89988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655267   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655282   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.655309   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.655321   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.658570   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.855939   29546 request.go:632] Waited for 196.380436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855992   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855997   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.856004   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.856006   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.859600   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.860576   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.860591   29546 pod_ready.go:82] duration metric: took 400.349149ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.860600   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.055168   29546 request.go:632] Waited for 194.505202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055234   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055241   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.055251   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.055260   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.058275   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:14.255766   29546 request.go:632] Waited for 196.386628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255858   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255870   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.255879   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.255887   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.259522   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.260240   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.260258   29546 pod_ready.go:82] duration metric: took 399.650728ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.260270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.455432   29546 request.go:632] Waited for 195.087635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455501   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455506   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.455516   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.455526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.458903   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.655253   29546 request.go:632] Waited for 195.299076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655305   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655310   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.655317   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.655324   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.658799   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.659327   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.659347   29546 pod_ready.go:82] duration metric: took 399.070103ms for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.659357   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.855420   29546 request.go:632] Waited for 195.985286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855474   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855480   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.855491   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.855500   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.859401   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.055482   29546 request.go:632] Waited for 195.388569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055554   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055561   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.055571   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.055577   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.059246   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.059980   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.060007   29546 pod_ready.go:82] duration metric: took 400.64405ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.060015   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.256131   29546 request.go:632] Waited for 196.057373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256206   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256211   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.256217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.256220   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.260016   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.456032   29546 request.go:632] Waited for 195.376218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456111   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456119   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.456127   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.456134   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.459253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.459960   29546 pod_ready.go:93] pod "kube-proxy-jskx5" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.459978   29546 pod_ready.go:82] duration metric: took 399.957197ms for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.459988   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.656143   29546 request.go:632] Waited for 196.090186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656203   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656215   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.656234   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.656239   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.660293   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:15.855333   29546 request.go:632] Waited for 194.281869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855386   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855391   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.855398   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.855404   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.859011   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.859698   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.859721   29546 pod_ready.go:82] duration metric: took 399.724688ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.859733   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.055835   29546 request.go:632] Waited for 196.022812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055896   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055901   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.055909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.055916   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.059373   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.256147   29546 request.go:632] Waited for 196.031182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256218   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256225   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.256233   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.256236   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.259485   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.260205   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.260224   29546 pod_ready.go:82] duration metric: took 400.483331ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.260233   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.456145   29546 request.go:632] Waited for 195.844042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456197   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456203   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.456211   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.456216   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.459396   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.655597   29546 request.go:632] Waited for 195.489863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655686   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.655697   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.655702   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.659872   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:16.660490   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.660506   29546 pod_ready.go:82] duration metric: took 400.26753ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.660515   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.856056   29546 request.go:632] Waited for 195.455765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856110   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856115   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.856123   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.856132   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.860019   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:17.056049   29546 request.go:632] Waited for 195.339366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056101   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056106   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.056113   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.056117   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.060195   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.060851   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:17.060872   29546 pod_ready.go:82] duration metric: took 400.349558ms for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:17.060887   29546 pod_ready.go:39] duration metric: took 5.200957368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:18:17.060904   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:18:17.060962   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:18:17.079611   29546 api_server.go:72] duration metric: took 19.620113639s to wait for apiserver process to appear ...
	I0920 21:18:17.079636   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:18:17.079653   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:18:17.084062   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0920 21:18:17.084125   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:18:17.084133   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.084141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.084149   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.085207   29546 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 21:18:17.085276   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:18:17.085291   29546 api_server.go:131] duration metric: took 5.649897ms to wait for apiserver health ...
	I0920 21:18:17.085298   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:18:17.256039   29546 request.go:632] Waited for 170.668533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256110   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.256118   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.256123   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.262225   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:17.269500   29546 system_pods.go:59] 24 kube-system pods found
	I0920 21:18:17.269529   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.269535   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.269539   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.269544   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.269549   29546 system_pods.go:61] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.269554   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.269561   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.269569   29546 system_pods.go:61] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.269577   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.269584   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.269592   29546 system_pods.go:61] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.269597   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.269606   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.269609   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.269615   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.269618   29546 system_pods.go:61] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.269624   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.269628   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.269634   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.269640   29546 system_pods.go:61] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.269643   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.269650   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.269655   29546 system_pods.go:61] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.269663   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.269674   29546 system_pods.go:74] duration metric: took 184.367541ms to wait for pod list to return data ...
	I0920 21:18:17.269686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:18:17.455854   29546 request.go:632] Waited for 186.091197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455909   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455915   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.455925   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.455937   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.460043   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.460164   29546 default_sa.go:45] found service account: "default"
	I0920 21:18:17.460183   29546 default_sa.go:55] duration metric: took 190.487846ms for default service account to be created ...
	I0920 21:18:17.460191   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:18:17.655637   29546 request.go:632] Waited for 195.376962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655710   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655718   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.655729   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.655734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.661370   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:17.667683   29546 system_pods.go:86] 24 kube-system pods found
	I0920 21:18:17.667707   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.667713   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.667717   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.667721   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.667724   29546 system_pods.go:89] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.667728   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.667733   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.667738   29546 system_pods.go:89] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.667744   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.667753   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.667762   29546 system_pods.go:89] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.667770   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.667776   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.667784   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.667788   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.667792   29546 system_pods.go:89] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.667796   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.667802   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.667806   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.667812   29546 system_pods.go:89] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.667816   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.667820   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.667823   29546 system_pods.go:89] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.667829   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.667835   29546 system_pods.go:126] duration metric: took 207.638426ms to wait for k8s-apps to be running ...
	I0920 21:18:17.667846   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:18:17.667895   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:18:17.684265   29546 system_svc.go:56] duration metric: took 16.408361ms WaitForService to wait for kubelet
	I0920 21:18:17.684294   29546 kubeadm.go:582] duration metric: took 20.224798232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:18:17.684316   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:18:17.855695   29546 request.go:632] Waited for 171.306025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855762   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.855769   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.855773   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.860127   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.861061   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861078   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861097   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861107   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861116   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861121   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861127   29546 node_conditions.go:105] duration metric: took 176.804918ms to run NodePressure ...
	I0920 21:18:17.861140   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:18:17.861159   29546 start.go:255] writing updated cluster config ...
	I0920 21:18:17.861474   29546 ssh_runner.go:195] Run: rm -f paused
	I0920 21:18:17.914152   29546 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 21:18:17.916981   29546 out.go:177] * Done! kubectl is now configured to use "ha-797954" cluster and "default" namespace by default
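
The start log above ends with minikube's wait loop succeeding: it repeatedly polls the node and pod "Ready" conditions through the apiserver, then probes the apiserver's /healthz endpoint (21:18:17.079653) before declaring the cluster usable. As a rough illustration only — this is not minikube's actual api_server.go code, and it assumes certificate verification is skipped for brevity, whereas the real client authenticates with the cluster CA — the healthz probe reduces to a loop like the following sketch:

// Minimal sketch of the readiness-polling pattern shown in the log:
// GET the apiserver /healthz endpoint until it returns 200 OK or a
// timeout expires. URL and TLS handling are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch only: skip cert checks instead of
			// loading the cluster CA as the real client does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // back off between probes
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.240:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the log, this check succeeds on the first attempt ("https://192.168.39.240:8443/healthz returned 200: ok"), after which the remaining waits (system pods, default service account, kubelet service, NodePressure) complete and the run prints "Done!".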
	
	
	==> CRI-O <==
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.776050079Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867315776027144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d90d56c-196d-491c-af2d-4c412baf238e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.776556550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6ae4c66-1a5a-42e7-9191-ca78f52cd89f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.776625918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6ae4c66-1a5a-42e7-9191-ca78f52cd89f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.776858339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6ae4c66-1a5a-42e7-9191-ca78f52cd89f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.819441046Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fd322b3f-90e3-4ecc-9ccc-1393d494cc0b name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.819583075Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fd322b3f-90e3-4ecc-9ccc-1393d494cc0b name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.821450589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e7826f46-884f-482b-806b-0f7a26da7b0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.822022406Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867315821993443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e7826f46-884f-482b-806b-0f7a26da7b0a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.822627585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbd4b426-1dd1-4b48-901f-935cadf9ab22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.822709119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbd4b426-1dd1-4b48-901f-935cadf9ab22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.822981641Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbd4b426-1dd1-4b48-901f-935cadf9ab22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.862936567Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7ac5c561-be77-4657-81e5-72c6af9b54e9 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.863018328Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7ac5c561-be77-4657-81e5-72c6af9b54e9 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.864218158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9a461056-eb5d-4ba0-9c6f-e8049f117296 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.864693960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867315864669939,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9a461056-eb5d-4ba0-9c6f-e8049f117296 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.865164460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=55f9cb11-6b9c-41a9-8a8f-4e36f6ed1205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.865219534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=55f9cb11-6b9c-41a9-8a8f-4e36f6ed1205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.865448595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=55f9cb11-6b9c-41a9-8a8f-4e36f6ed1205 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.905220142Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff6b6a86-4535-4990-91fa-6f49af6f6b55 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.905308428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff6b6a86-4535-4990-91fa-6f49af6f6b55 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.906804811Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9d09866-cd98-494b-8344-ea3a978e97f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.907212024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867315907191595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9d09866-cd98-494b-8344-ea3a978e97f1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.907846326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dc216a8-3224-4186-8c5e-af6485a409df name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.907903185Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dc216a8-3224-4186-8c5e-af6485a409df name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:21:55 ha-797954 crio[666]: time="2024-09-20 21:21:55.908570975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dc216a8-3224-4186-8c5e-af6485a409df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	468d1e1f7ac8a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d6db457e8bbe8       busybox-7dff88458-44q29
	e3c9e6da67286       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   48e2a20451c46       coredns-7c65d6cfc9-djhfr
	0e272e92406da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   891e1332d4c2d       coredns-7c65d6cfc9-bkcf8
	0fdd2277dc992       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   0ac27cef2729c       storage-provisioner
	d34d703bb77f7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   794a246d61681       kindnet-6lk2f
	6367518948d4f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   457a98ab41d5f       kube-proxy-qx9ph
	371dadd621425       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2ced8553e3012       kube-vip-ha-797954
	34e481bafb0d8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0e7df03407c5d       kube-scheduler-ha-797954
	613c4c00096cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   388cd6346f6a7       etcd-ha-797954
	3ec2d3bd57d8b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   2c27a0b835060       kube-controller-manager-ha-797954
	0d975d52ea292       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   db69670c3a315       kube-apiserver-ha-797954
	
	
	==> coredns [0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab] <==
	[INFO] 10.244.0.4:57168 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.029279393s
	[INFO] 10.244.1.2:56742 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000226618s
	[INFO] 10.244.1.2:46151 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000739523s
	[INFO] 10.244.2.2:36832 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000342649s
	[INFO] 10.244.0.4:48215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142911s
	[INFO] 10.244.0.4:37051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021353644s
	[INFO] 10.244.0.4:56814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025486s
	[INFO] 10.244.0.4:60619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004011357s
	[INFO] 10.244.0.4:45781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219299s
	[INFO] 10.244.1.2:58895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003827s
	[INFO] 10.244.1.2:52799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348157s
	[INFO] 10.244.1.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146401s
	[INFO] 10.244.1.2:58888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000279379s
	[INFO] 10.244.2.2:45400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149773s
	[INFO] 10.244.2.2:42656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000284841s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168662s
	[INFO] 10.244.1.2:33336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145322s
	[INFO] 10.244.1.2:44953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000245865s
	[INFO] 10.244.2.2:58673 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183778s
	[INFO] 10.244.2.2:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198561s
	[INFO] 10.244.1.2:49222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209152s
	[INFO] 10.244.1.2:57989 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183561s
	[INFO] 10.244.1.2:57201 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017075s
	[INFO] 10.244.2.2:46144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184056s
	[INFO] 10.244.2.2:53312 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152142s
	
	
	==> coredns [e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50] <==
	[INFO] 10.244.0.4:37614 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011425s
	[INFO] 10.244.1.2:60075 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002047091s
	[INFO] 10.244.1.2:53774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137405s
	[INFO] 10.244.1.2:43436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001572851s
	[INFO] 10.244.1.2:50084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173888s
	[INFO] 10.244.2.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281143s
	[INFO] 10.244.2.2:49861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002195678s
	[INFO] 10.244.2.2:43162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194152s
	[INFO] 10.244.2.2:38062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149126s
	[INFO] 10.244.2.2:46161 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726482s
	[INFO] 10.244.2.2:38358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105606s
	[INFO] 10.244.0.4:34794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107864s
	[INFO] 10.244.0.4:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073572s
	[INFO] 10.244.0.4:59058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059888s
	[INFO] 10.244.1.2:33254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111604s
	[INFO] 10.244.1.2:55753 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094078s
	[INFO] 10.244.2.2:46481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128659s
	[INFO] 10.244.2.2:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192291s
	[INFO] 10.244.0.4:45612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142384s
	[INFO] 10.244.0.4:32977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240882s
	[INFO] 10.244.0.4:36320 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157906s
	[INFO] 10.244.0.4:57209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121261s
	[INFO] 10.244.1.2:52740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116141s
	[INFO] 10.244.2.2:45858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145799s
	[INFO] 10.244.2.2:32912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224434s
	
	
	==> describe nodes <==
	Name:               ha-797954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-797954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ee8e79db7754067acc26d8f79541cf4
	  System UUID:                1ee8e79d-b775-4067-acc2-6d8f79541cf4
	  Boot ID:                    2a01d700-0c79-4cac-934b-ff091b3f743b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-44q29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-bkcf8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-djhfr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-797954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-6lk2f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m11s
	  kube-system                 kube-apiserver-ha-797954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-797954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-qx9ph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m11s
	  kube-system                 kube-scheduler-ha-797954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-vip-ha-797954                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m8s   kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s  kubelet          Node ha-797954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s  kubelet          Node ha-797954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s  kubelet          Node ha-797954 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  NodeReady                5m58s  kubelet          Node ha-797954 status is now: NodeReady
	  Normal  RegisteredNode           5m14s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  RegisteredNode           3m53s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	
	
	Name:               ha-797954-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:16:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:19:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-797954-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b99cf888073c415c8a60d9eed0a33a77
	  System UUID:                b99cf888-073c-415c-8a60-d9eed0a33a77
	  Boot ID:                    9ba91b19-4463-4b79-aaf1-951e8cebdd88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5m4df                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-797954-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m19s
	  kube-system                 kindnet-qh5rt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m22s
	  kube-system                 kube-apiserver-ha-797954-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-797954-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-2pjxq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-scheduler-ha-797954-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-797954-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m22s (x8 over 5m22s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m22s (x8 over 5m22s)  kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m22s (x7 over 5m22s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           3m53s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  NodeNotReady             108s                   node-controller  Node ha-797954-m02 status is now: NodeNotReady
	
	
	Name:               ha-797954-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-797954-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17e0c265db3c48ee8ba8b007e24689e7
	  System UUID:                17e0c265-db3c-48ee-8ba8-b007e24689e7
	  Boot ID:                    0d991bd8-dcf2-4a85-9be2-891290e9f74b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s4w7f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-797954-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m1s
	  kube-system                 kindnet-tm8ml                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m3s
	  kube-system                 kube-apiserver-ha-797954-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-controller-manager-ha-797954-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-proxy-jskx5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-scheduler-ha-797954-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m1s
	  kube-system                 kube-vip-ha-797954-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m4s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m4s)  kubelet          Node ha-797954-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m4s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           3m53s                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	
	
	Name:               ha-797954-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_18_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-797954-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 301819962cda48ca9fc543396f387646
	  System UUID:                30181996-2cda-48ca-9fc5-43396f387646
	  Boot ID:                    eb990af5-0bb6-4a23-a507-cf3ea2019e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5zw6l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-ghvv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m54s              kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           2m58s              node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  NodeReady                2m41s              kubelet          Node ha-797954-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039875] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.837230] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.856812] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062395] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.182824] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.137335] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.286470] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.136041] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.054578] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.065217] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.071912] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.088568] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.488875] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.290079] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:16] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e] <==
	{"level":"warn","ts":"2024-09-20T21:21:56.198312Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.207613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.212199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.231556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.242287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.252188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.255728Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.259399Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.265036Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.272711Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.280105Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.283747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.287011Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.292070Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.292401Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.295584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.304726Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.312255Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.317541Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.321660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.326456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.333422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.341647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.382386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:21:56.391730Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:21:56 up 6 min,  0 users,  load average: 0.10, 0.29, 0.16
	Linux ha-797954 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2] <==
	I0920 21:21:17.971564       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:27.976392       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:27.976561       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:27.976750       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:27.976821       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:27.976889       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:27.976908       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:27.976962       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:27.976980       1 main.go:299] handling current node
	I0920 21:21:37.972285       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:37.972329       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:37.972581       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:37.972607       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:37.972661       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:37.972683       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:37.972731       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:37.972737       1 main.go:299] handling current node
	I0920 21:21:47.971538       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:47.971596       1 main.go:299] handling current node
	I0920 21:21:47.971629       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:47.971638       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:47.971894       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:47.971922       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:47.971991       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:47.972013       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674] <==
	W0920 21:15:39.995416       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240]
	I0920 21:15:39.997201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 21:15:40.003324       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 21:15:40.250083       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 21:15:41.192926       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 21:15:41.217232       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 21:15:41.227871       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 21:15:45.151589       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 21:15:45.903425       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 21:18:22.997555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52530: use of closed network connection
	E0920 21:18:23.183834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52560: use of closed network connection
	E0920 21:18:23.372686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0920 21:18:23.592050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52590: use of closed network connection
	E0920 21:18:23.773849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55264: use of closed network connection
	E0920 21:18:23.984978       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55286: use of closed network connection
	E0920 21:18:24.179836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55308: use of closed network connection
	E0920 21:18:24.361809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55330: use of closed network connection
	E0920 21:18:24.551154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55360: use of closed network connection
	E0920 21:18:24.848743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0920 21:18:25.024085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55414: use of closed network connection
	E0920 21:18:25.219294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55430: use of closed network connection
	E0920 21:18:25.388448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55444: use of closed network connection
	E0920 21:18:25.573028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55472: use of closed network connection
	E0920 21:18:25.768039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55484: use of closed network connection
	W0920 21:19:49.996169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.240]
	
	
	==> kube-controller-manager [3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360] <==
	I0920 21:18:56.149866       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-797954-m04\" does not exist"
	I0920 21:18:56.187860       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-797954-m04" podCIDRs=["10.244.3.0/24"]
	I0920 21:18:56.187915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.187977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.481322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.872458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:57.921726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.169798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.196622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.119543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.120039       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-797954-m04"
	I0920 21:19:00.151133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:06.208726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212832       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:19:15.244613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:17.869384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:26.358050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:20:08.200630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.200711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:20:08.233657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.344363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.289751ms"
	I0920 21:20:08.345202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.016µs"
	I0920 21:20:10.222320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:13.505001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	
	
	==> kube-proxy [6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:15:47.138996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:15:47.173137       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	E0920 21:15:47.173284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:15:47.239418       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:15:47.244602       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:15:47.244668       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:15:47.323829       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:15:47.324671       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:15:47.324724       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:15:47.327319       1 config.go:199] "Starting service config controller"
	I0920 21:15:47.329757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:15:47.330178       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:15:47.330205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:15:47.331254       1 config.go:328] "Starting node config controller"
	I0920 21:15:47.331284       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:15:47.430176       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:15:47.430270       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:15:47.431643       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463] <==
	I0920 21:18:18.810601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5m4df" node="ha-797954-m02"
	E0920 21:18:18.861661       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.861738       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5110f648-42f3-47e2-bc5a-4b33d1468080(default/busybox-7dff88458-s4w7f) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-s4w7f"
	E0920 21:18:18.861759       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" pod="default/busybox-7dff88458-s4w7f"
	I0920 21:18:18.861781       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.864713       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:18.864788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 443788ae-8329-43bd-819c-a80e37254cf3(default/busybox-7dff88458-44q29) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-44q29"
	E0920 21:18:18.864810       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" pod="default/busybox-7dff88458-44q29"
	I0920 21:18:18.864831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:56.226900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.227110       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a8b2d657-43d8-4e0d-8200-e11fb64ed35c(kube-system/kindnet-5zw6l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5zw6l"
	E0920 21:18:56.227211       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-5zw6l"
	I0920 21:18:56.227382       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.228063       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.234373       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 586765d9-0991-47c1-aa4f-7c82cb8a2e76(kube-system/kube-proxy-ghvv9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvv9"
	E0920 21:18:56.234534       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-ghvv9"
	I0920 21:18:56.234608       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.279871       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kct8" node="ha-797954-m04"
	E0920 21:18:56.280008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-4kct8"
	E0920 21:18:56.280247       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dpjmv" node="ha-797954-m04"
	E0920 21:18:56.280445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-dpjmv"
	E0920 21:18:56.401044       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:18:56.401344       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e53077ca-c9f3-433c-8f93-766336e4a9ab(kube-system/kindnet-r4kzr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r4kzr"
	E0920 21:18:56.401460       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-r4kzr"
	I0920 21:18:56.401669       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	
	
	==> kubelet <==
	Sep 20 21:20:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:20:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:20:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:20:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280669    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280717    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282414    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282759    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284298    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284350    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286306    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286783    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289176    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289509    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293560    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293612    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.149114    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295407    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295545    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297121    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297162    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-797954 -n ha-797954
helpers_test.go:261: (dbg) Run:  kubectl --context ha-797954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.52s)
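Note on the kubelet journal quoted above: most of it is the same eviction-manager message repeated every ten seconds ("failed to get HasDedicatedImageFs ... missing image stats"), which comes from the CRI-O ImageFsInfo response rather than from the node stop under test. Below is a minimal sketch (not part of the test suite) for separating that recurring message from any other eviction-manager errors in a saved kubelet journal; the "kubelet.log" file name is an assumption, e.g. the output of "journalctl -u kubelet" redirected to a file.

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Assumed input: a plain-text kubelet journal dump saved as kubelet.log.
		f, err := os.Open("kubelet.log")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // the ImageFsInfoResponse lines are very long

		noise, other := 0, 0
		for sc.Scan() {
			line := sc.Text()
			if !strings.Contains(line, "eviction_manager") {
				continue
			}
			if strings.Contains(line, "missing image stats") {
				noise++ // the recurring HasDedicatedImageFs message seen above
			} else {
				other++ // anything else from the eviction manager deserves a closer look
			}
		}
		if err := sc.Err(); err != nil {
			panic(err)
		}
		fmt.Printf("missing-image-stats lines: %d, other eviction_manager lines: %d\n", noise, other)
	}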

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.401491534s)
ha_test.go:413: expected profile "ha-797954" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-797954\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-797954\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-797954\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.240\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.63\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.108\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.210\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\
"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\
":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-797954 -n ha-797954
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 logs -n 25: (1.450461401s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m03_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m04 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp testdata/cp-test.txt                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m03 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-797954 node stop m02 -v=7                                                     | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:14:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:14:58.093383   29546 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:14:58.093518   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093529   29546 out.go:358] Setting ErrFile to fd 2...
	I0920 21:14:58.093536   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093721   29546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:14:58.094299   29546 out.go:352] Setting JSON to false
	I0920 21:14:58.095137   29546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3446,"bootTime":1726863452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:14:58.095237   29546 start.go:139] virtualization: kvm guest
	I0920 21:14:58.097575   29546 out.go:177] * [ha-797954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:14:58.099174   29546 notify.go:220] Checking for updates...
	I0920 21:14:58.099184   29546 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:14:58.100699   29546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:14:58.102318   29546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:14:58.103731   29546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.105067   29546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:14:58.106539   29546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:14:58.108120   29546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:14:58.144556   29546 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 21:14:58.146098   29546 start.go:297] selected driver: kvm2
	I0920 21:14:58.146111   29546 start.go:901] validating driver "kvm2" against <nil>
	I0920 21:14:58.146123   29546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:14:58.146850   29546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.146917   29546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:14:58.162003   29546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:14:58.162056   29546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 21:14:58.162303   29546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:14:58.162329   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:14:58.162373   29546 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 21:14:58.162381   29546 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 21:14:58.162430   29546 start.go:340] cluster config:
	{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0920 21:14:58.162516   29546 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.164481   29546 out.go:177] * Starting "ha-797954" primary control-plane node in "ha-797954" cluster
	I0920 21:14:58.165856   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:14:58.165884   29546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:14:58.165902   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:14:58.165974   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:14:58.165984   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:14:58.166298   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:14:58.166316   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json: {Name:mk8fa57fb542c3761d64832828491b92c6ccc2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:14:58.166454   29546 start.go:360] acquireMachinesLock for ha-797954: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:14:58.166482   29546 start.go:364] duration metric: took 15.486µs to acquireMachinesLock for "ha-797954"
	I0920 21:14:58.166497   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:14:58.166552   29546 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 21:14:58.168264   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:14:58.168395   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:58.168428   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:58.182843   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0920 21:14:58.183355   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:58.183917   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:14:58.183941   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:58.184213   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:58.184397   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:14:58.184524   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:14:58.184653   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:14:58.184689   29546 client.go:168] LocalClient.Create starting
	I0920 21:14:58.184716   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:14:58.184749   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184762   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184806   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:14:58.184823   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184836   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184851   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:14:58.184863   29546 main.go:141] libmachine: (ha-797954) Calling .PreCreateCheck
	I0920 21:14:58.185178   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:14:58.185555   29546 main.go:141] libmachine: Creating machine...
	I0920 21:14:58.185568   29546 main.go:141] libmachine: (ha-797954) Calling .Create
	I0920 21:14:58.185703   29546 main.go:141] libmachine: (ha-797954) Creating KVM machine...
	I0920 21:14:58.186926   29546 main.go:141] libmachine: (ha-797954) DBG | found existing default KVM network
	I0920 21:14:58.187682   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.187531   29569 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 21:14:58.187702   29546 main.go:141] libmachine: (ha-797954) DBG | created network xml: 
	I0920 21:14:58.187715   29546 main.go:141] libmachine: (ha-797954) DBG | <network>
	I0920 21:14:58.187726   29546 main.go:141] libmachine: (ha-797954) DBG |   <name>mk-ha-797954</name>
	I0920 21:14:58.187736   29546 main.go:141] libmachine: (ha-797954) DBG |   <dns enable='no'/>
	I0920 21:14:58.187745   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187755   29546 main.go:141] libmachine: (ha-797954) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 21:14:58.187763   29546 main.go:141] libmachine: (ha-797954) DBG |     <dhcp>
	I0920 21:14:58.187769   29546 main.go:141] libmachine: (ha-797954) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 21:14:58.187775   29546 main.go:141] libmachine: (ha-797954) DBG |     </dhcp>
	I0920 21:14:58.187783   29546 main.go:141] libmachine: (ha-797954) DBG |   </ip>
	I0920 21:14:58.187799   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187840   29546 main.go:141] libmachine: (ha-797954) DBG | </network>
	I0920 21:14:58.187855   29546 main.go:141] libmachine: (ha-797954) DBG | 
	I0920 21:14:58.192877   29546 main.go:141] libmachine: (ha-797954) DBG | trying to create private KVM network mk-ha-797954 192.168.39.0/24...
	I0920 21:14:58.259633   29546 main.go:141] libmachine: (ha-797954) DBG | private KVM network mk-ha-797954 192.168.39.0/24 created
	I0920 21:14:58.259663   29546 main.go:141] libmachine: (ha-797954) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.259676   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.259606   29569 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.259694   29546 main.go:141] libmachine: (ha-797954) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:14:58.259758   29546 main.go:141] libmachine: (ha-797954) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:14:58.505872   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.505739   29569 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa...
	I0920 21:14:58.596789   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596635   29569 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk...
	I0920 21:14:58.596822   29546 main.go:141] libmachine: (ha-797954) DBG | Writing magic tar header
	I0920 21:14:58.596834   29546 main.go:141] libmachine: (ha-797954) DBG | Writing SSH key tar header
	I0920 21:14:58.596844   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596754   29569 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.596858   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954
	I0920 21:14:58.596904   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 (perms=drwx------)
	I0920 21:14:58.596921   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:14:58.596928   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:14:58.596967   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.596981   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:14:58.596994   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:14:58.597010   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:14:58.597027   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:14:58.597042   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:14:58.597053   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:14:58.597060   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home
	I0920 21:14:58.597068   29546 main.go:141] libmachine: (ha-797954) DBG | Skipping /home - not owner
	I0920 21:14:58.597077   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:14:58.597083   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:58.598143   29546 main.go:141] libmachine: (ha-797954) define libvirt domain using xml: 
	I0920 21:14:58.598171   29546 main.go:141] libmachine: (ha-797954) <domain type='kvm'>
	I0920 21:14:58.598181   29546 main.go:141] libmachine: (ha-797954)   <name>ha-797954</name>
	I0920 21:14:58.598192   29546 main.go:141] libmachine: (ha-797954)   <memory unit='MiB'>2200</memory>
	I0920 21:14:58.598202   29546 main.go:141] libmachine: (ha-797954)   <vcpu>2</vcpu>
	I0920 21:14:58.598215   29546 main.go:141] libmachine: (ha-797954)   <features>
	I0920 21:14:58.598243   29546 main.go:141] libmachine: (ha-797954)     <acpi/>
	I0920 21:14:58.598261   29546 main.go:141] libmachine: (ha-797954)     <apic/>
	I0920 21:14:58.598269   29546 main.go:141] libmachine: (ha-797954)     <pae/>
	I0920 21:14:58.598292   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598311   29546 main.go:141] libmachine: (ha-797954)   </features>
	I0920 21:14:58.598326   29546 main.go:141] libmachine: (ha-797954)   <cpu mode='host-passthrough'>
	I0920 21:14:58.598354   29546 main.go:141] libmachine: (ha-797954)   
	I0920 21:14:58.598373   29546 main.go:141] libmachine: (ha-797954)   </cpu>
	I0920 21:14:58.598388   29546 main.go:141] libmachine: (ha-797954)   <os>
	I0920 21:14:58.598403   29546 main.go:141] libmachine: (ha-797954)     <type>hvm</type>
	I0920 21:14:58.598417   29546 main.go:141] libmachine: (ha-797954)     <boot dev='cdrom'/>
	I0920 21:14:58.598424   29546 main.go:141] libmachine: (ha-797954)     <boot dev='hd'/>
	I0920 21:14:58.598430   29546 main.go:141] libmachine: (ha-797954)     <bootmenu enable='no'/>
	I0920 21:14:58.598435   29546 main.go:141] libmachine: (ha-797954)   </os>
	I0920 21:14:58.598441   29546 main.go:141] libmachine: (ha-797954)   <devices>
	I0920 21:14:58.598451   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='cdrom'>
	I0920 21:14:58.598467   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/boot2docker.iso'/>
	I0920 21:14:58.598482   29546 main.go:141] libmachine: (ha-797954)       <target dev='hdc' bus='scsi'/>
	I0920 21:14:58.598498   29546 main.go:141] libmachine: (ha-797954)       <readonly/>
	I0920 21:14:58.598520   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598530   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='disk'>
	I0920 21:14:58.598537   29546 main.go:141] libmachine: (ha-797954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:14:58.598552   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk'/>
	I0920 21:14:58.598567   29546 main.go:141] libmachine: (ha-797954)       <target dev='hda' bus='virtio'/>
	I0920 21:14:58.598578   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598592   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598603   29546 main.go:141] libmachine: (ha-797954)       <source network='mk-ha-797954'/>
	I0920 21:14:58.598612   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598623   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598631   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598639   29546 main.go:141] libmachine: (ha-797954)       <source network='default'/>
	I0920 21:14:58.598650   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598658   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598668   29546 main.go:141] libmachine: (ha-797954)     <serial type='pty'>
	I0920 21:14:58.598676   29546 main.go:141] libmachine: (ha-797954)       <target port='0'/>
	I0920 21:14:58.598685   29546 main.go:141] libmachine: (ha-797954)     </serial>
	I0920 21:14:58.598696   29546 main.go:141] libmachine: (ha-797954)     <console type='pty'>
	I0920 21:14:58.598706   29546 main.go:141] libmachine: (ha-797954)       <target type='serial' port='0'/>
	I0920 21:14:58.598718   29546 main.go:141] libmachine: (ha-797954)     </console>
	I0920 21:14:58.598725   29546 main.go:141] libmachine: (ha-797954)     <rng model='virtio'>
	I0920 21:14:58.598738   29546 main.go:141] libmachine: (ha-797954)       <backend model='random'>/dev/random</backend>
	I0920 21:14:58.598754   29546 main.go:141] libmachine: (ha-797954)     </rng>
	I0920 21:14:58.598766   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598780   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598790   29546 main.go:141] libmachine: (ha-797954)   </devices>
	I0920 21:14:58.598799   29546 main.go:141] libmachine: (ha-797954) </domain>
	I0920 21:14:58.598807   29546 main.go:141] libmachine: (ha-797954) 
	I0920 21:14:58.603113   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:c6:03:52 in network default
	I0920 21:14:58.603707   29546 main.go:141] libmachine: (ha-797954) Ensuring networks are active...
	I0920 21:14:58.603742   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:58.604393   29546 main.go:141] libmachine: (ha-797954) Ensuring network default is active
	I0920 21:14:58.604660   29546 main.go:141] libmachine: (ha-797954) Ensuring network mk-ha-797954 is active
	I0920 21:14:58.605088   29546 main.go:141] libmachine: (ha-797954) Getting domain xml...
	I0920 21:14:58.605647   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:59.802448   29546 main.go:141] libmachine: (ha-797954) Waiting to get IP...
	I0920 21:14:59.803386   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:59.803845   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:14:59.803891   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:59.803834   29569 retry.go:31] will retry after 223.587991ms: waiting for machine to come up
	I0920 21:15:00.029437   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.029858   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.029879   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.029830   29569 retry.go:31] will retry after 354.177356ms: waiting for machine to come up
	I0920 21:15:00.385460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.385795   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.385813   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.385762   29569 retry.go:31] will retry after 379.865741ms: waiting for machine to come up
	I0920 21:15:00.767361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.767813   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.767841   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.767778   29569 retry.go:31] will retry after 505.320471ms: waiting for machine to come up
	I0920 21:15:01.274395   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.274767   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.274801   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.274739   29569 retry.go:31] will retry after 590.453192ms: waiting for machine to come up
	I0920 21:15:01.866507   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.866905   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.866927   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.866860   29569 retry.go:31] will retry after 645.79905ms: waiting for machine to come up
	I0920 21:15:02.514921   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:02.515518   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:02.515553   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:02.515465   29569 retry.go:31] will retry after 946.798883ms: waiting for machine to come up
	I0920 21:15:03.464250   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:03.464679   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:03.464704   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:03.464640   29569 retry.go:31] will retry after 983.162198ms: waiting for machine to come up
	I0920 21:15:04.450382   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:04.450835   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:04.450858   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:04.450809   29569 retry.go:31] will retry after 1.725909532s: waiting for machine to come up
	I0920 21:15:06.178808   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:06.179200   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:06.179226   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:06.179150   29569 retry.go:31] will retry after 2.07990615s: waiting for machine to come up
	I0920 21:15:08.260549   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:08.261003   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:08.261030   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:08.260968   29569 retry.go:31] will retry after 2.83157909s: waiting for machine to come up
	I0920 21:15:11.093814   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:11.094191   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:11.094225   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:11.094158   29569 retry.go:31] will retry after 3.472040975s: waiting for machine to come up
	I0920 21:15:14.569242   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:14.569658   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:14.569683   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:14.569603   29569 retry.go:31] will retry after 2.791417275s: waiting for machine to come up
	I0920 21:15:17.364625   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:17.365099   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:17.365116   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:17.365066   29569 retry.go:31] will retry after 5.415425974s: waiting for machine to come up
	I0920 21:15:22.785521   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786004   29546 main.go:141] libmachine: (ha-797954) Found IP for machine: 192.168.39.240
	I0920 21:15:22.786028   29546 main.go:141] libmachine: (ha-797954) Reserving static IP address...
	I0920 21:15:22.786041   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has current primary IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786330   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find host DHCP lease matching {name: "ha-797954", mac: "52:54:00:62:bb:49", ip: "192.168.39.240"} in network mk-ha-797954
	I0920 21:15:22.857285   29546 main.go:141] libmachine: (ha-797954) Reserved static IP address: 192.168.39.240
	I0920 21:15:22.857326   29546 main.go:141] libmachine: (ha-797954) Waiting for SSH to be available...
	I0920 21:15:22.857348   29546 main.go:141] libmachine: (ha-797954) DBG | Getting to WaitForSSH function...
	I0920 21:15:22.859900   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860220   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.860240   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860390   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH client type: external
	I0920 21:15:22.860414   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa (-rw-------)
	I0920 21:15:22.860439   29546 main.go:141] libmachine: (ha-797954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:15:22.860456   29546 main.go:141] libmachine: (ha-797954) DBG | About to run SSH command:
	I0920 21:15:22.860469   29546 main.go:141] libmachine: (ha-797954) DBG | exit 0
	I0920 21:15:22.987393   29546 main.go:141] libmachine: (ha-797954) DBG | SSH cmd err, output: <nil>: 
	I0920 21:15:22.987618   29546 main.go:141] libmachine: (ha-797954) KVM machine creation complete!
	I0920 21:15:22.987926   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:22.988498   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988678   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988799   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:15:22.988815   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:22.990056   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:15:22.990073   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:15:22.990078   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:15:22.990084   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:22.992361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992791   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.992813   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992947   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:22.993123   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993226   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993339   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:22.993444   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:22.993635   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:22.993647   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:15:23.098405   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:15:23.098428   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:15:23.098437   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.101054   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101432   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.101460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101637   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.101840   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.101988   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.102095   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.102248   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.102456   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.102471   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:15:23.207907   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:15:23.208033   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:15:23.208051   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:15:23.208064   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208298   29546 buildroot.go:166] provisioning hostname "ha-797954"
	I0920 21:15:23.208319   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208502   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.211194   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211511   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.211538   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211661   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.211829   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212009   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212107   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.212272   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.212442   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.212456   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954 && echo "ha-797954" | sudo tee /etc/hostname
	I0920 21:15:23.333898   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:15:23.333955   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.336557   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337630   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.337652   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337927   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.338114   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338300   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338451   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.338601   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.338766   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.338781   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:15:23.457012   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
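The snippet above is the shell block libmachine pushes over SSH so the new hostname also resolves locally: if no /etc/hosts line already ends in ha-797954 it either rewrites the 127.0.1.1 entry in place or appends one. A minimal manual check of the result, assuming the ha-797954 profile from this run is still up (a sketch, not something the test executes):

    # confirm the hostname and its /etc/hosts entry inside the guest
    minikube ssh -p ha-797954 -- "hostname && grep -n ha-797954 /etc/hosts"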
	I0920 21:15:23.457045   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:15:23.457068   29546 buildroot.go:174] setting up certificates
	I0920 21:15:23.457076   29546 provision.go:84] configureAuth start
	I0920 21:15:23.457084   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.457358   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:23.460274   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460523   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.460544   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460743   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.463346   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463695   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.463721   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463823   29546 provision.go:143] copyHostCerts
	I0920 21:15:23.463873   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.463907   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:15:23.463922   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.464005   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:15:23.464111   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464141   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:15:23.464151   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464195   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:15:23.464272   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464310   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:15:23.464319   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464355   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:15:23.464469   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954 san=[127.0.0.1 192.168.39.240 ha-797954 localhost minikube]
	I0920 21:15:23.729781   29546 provision.go:177] copyRemoteCerts
	I0920 21:15:23.729854   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:15:23.729878   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.732636   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733071   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.733098   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733347   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.733569   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.733717   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.733849   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:23.819565   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:15:23.819631   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:15:23.849592   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:15:23.849659   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 21:15:23.875769   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:15:23.875849   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:15:23.901303   29546 provision.go:87] duration metric: took 444.215684ms to configureAuth
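configureAuth finished by copying the CA, server and client certificates generated under ~/.minikube into /etc/docker on the guest. A quick way to inspect the server certificate that was just generated for this machine, using the host-side path from the log (the openssl calls are a generic sketch, not part of the test):

    # show subject, validity window and SANs of the generated server cert
    openssl x509 -in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -noout -subject -dates
    openssl x509 -in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'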
	I0920 21:15:23.901335   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:15:23.901537   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:23.901620   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.904924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905466   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.905493   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905749   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.905969   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906138   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906294   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.906464   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.906655   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.906678   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:15:24.134358   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
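The command above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O, marking the 10.96.0.0/12 service CIDR as an insecure registry so pulls from cluster-internal registries (for example the registry addon) do not need TLS. To confirm the override landed and the daemon came back, a sketch assuming the profile is still running:

    minikube ssh -p ha-797954 -- "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"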
	
	I0920 21:15:24.134386   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:15:24.134394   29546 main.go:141] libmachine: (ha-797954) Calling .GetURL
	I0920 21:15:24.135777   29546 main.go:141] libmachine: (ha-797954) DBG | Using libvirt version 6000000
	I0920 21:15:24.138158   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138556   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.138587   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138722   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:15:24.138737   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:15:24.138745   29546 client.go:171] duration metric: took 25.95404705s to LocalClient.Create
	I0920 21:15:24.138772   29546 start.go:167] duration metric: took 25.954119175s to libmachine.API.Create "ha-797954"
	I0920 21:15:24.138783   29546 start.go:293] postStartSetup for "ha-797954" (driver="kvm2")
	I0920 21:15:24.138798   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:15:24.138818   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.139051   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:15:24.139076   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.140985   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141346   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.141374   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141499   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.141674   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.141828   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.142019   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.225959   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:15:24.230124   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:15:24.230144   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:15:24.230206   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:15:24.230307   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:15:24.230322   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:15:24.230441   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:15:24.240575   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:24.264696   29546 start.go:296] duration metric: took 125.900056ms for postStartSetup
	I0920 21:15:24.264741   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:24.265475   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.267916   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268228   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.268259   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268458   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:24.268638   29546 start.go:128] duration metric: took 26.102077231s to createHost
	I0920 21:15:24.268658   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.270797   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271111   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.271130   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271302   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.271452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271594   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271702   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.271838   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:24.272013   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:24.272027   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:15:24.379993   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866924.358104924
	
	I0920 21:15:24.380018   29546 fix.go:216] guest clock: 1726866924.358104924
	I0920 21:15:24.380025   29546 fix.go:229] Guest: 2024-09-20 21:15:24.358104924 +0000 UTC Remote: 2024-09-20 21:15:24.268649465 +0000 UTC m=+26.209850110 (delta=89.455459ms)
	I0920 21:15:24.380055   29546 fix.go:200] guest clock delta is within tolerance: 89.455459ms
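After provisioning, minikube samples the guest clock over SSH with date +%s.%N and compares it to the host clock taken around the same moment; here the ~89ms delta is inside the tolerance, so no correction is made. The comparison can be reproduced roughly like this (a sketch with illustrative variable names):

    host_now=$(date +%s.%N)
    guest_now=$(minikube ssh -p ha-797954 -- date +%s.%N)
    echo "clock skew: $(echo "$host_now - $guest_now" | bc) seconds"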
	I0920 21:15:24.380059   29546 start.go:83] releasing machines lock for "ha-797954", held for 26.213569983s
	I0920 21:15:24.380077   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.380328   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.382878   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383258   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.383280   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383441   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.383944   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384137   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384211   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:15:24.384265   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.384357   29546 ssh_runner.go:195] Run: cat /version.json
	I0920 21:15:24.384382   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.386758   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.386983   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387156   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387178   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387338   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387455   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387479   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387524   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387629   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387698   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.387766   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387832   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.387879   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.388006   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.464447   29546 ssh_runner.go:195] Run: systemctl --version
	I0920 21:15:24.489896   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:15:24.649570   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:15:24.656389   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:15:24.656455   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:15:24.672721   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:15:24.672744   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:15:24.672817   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:15:24.689886   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:15:24.704352   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:15:24.704400   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:15:24.718653   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:15:24.733865   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:15:24.855165   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:15:25.006541   29546 docker.go:233] disabling docker service ...
	I0920 21:15:25.006639   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:15:25.021286   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:15:25.034719   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:15:25.179842   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:15:25.303409   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:15:25.317836   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:15:25.337433   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:15:25.337507   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.348743   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:15:25.348803   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.359907   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.370733   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.381300   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:15:25.392436   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.403057   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.420657   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
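The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10, set cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. To review the resulting drop-in after the fact (a sketch, not part of the test):

    minikube ssh -p ha-797954 -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"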
	I0920 21:15:25.430817   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:15:25.440277   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:15:25.440345   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:15:25.453579   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
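The sysctl probe failed only because br_netfilter was not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding directly through /proc. The same fallback sequence, replayed by hand inside the guest (sketch):

    sudo sysctl net.bridge.bridge-nf-call-iptables || sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now succeed
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"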
	I0920 21:15:25.463440   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:25.581550   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:15:25.681183   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:15:25.681259   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:15:25.686062   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:15:25.686107   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:15:25.690069   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:15:25.733819   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:15:25.733901   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.764147   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.795214   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:15:25.796911   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:25.799674   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800015   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:25.800052   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800321   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:15:25.804808   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
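host.minikube.internal is made to resolve to the gateway (192.168.39.1) with an idempotent rewrite of /etc/hosts: drop any previous line for the name with grep -v, append the fresh mapping, and copy the temp file back with sudo. The same pattern is used again later for control-plane.minikube.internal. Spelled out on its own (a sketch of the pattern with illustrative names):

    entry=$'192.168.39.1\thost.minikube.internal'
    # strip any stale mapping, append the new one, then install the result
    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts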
	I0920 21:15:25.818394   29546 kubeadm.go:883] updating cluster {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:15:25.818488   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:25.818527   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:25.850610   29546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 21:15:25.850709   29546 ssh_runner.go:195] Run: which lz4
	I0920 21:15:25.854671   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 21:15:25.854769   29546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 21:15:25.859047   29546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 21:15:25.859084   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 21:15:27.274564   29546 crio.go:462] duration metric: took 1.41982381s to copy over tarball
	I0920 21:15:27.274662   29546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 21:15:29.326906   29546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.052207909s)
	I0920 21:15:29.326937   29546 crio.go:469] duration metric: took 2.052348508s to extract the tarball
	I0920 21:15:29.326946   29546 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 21:15:29.364795   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:29.420778   29546 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:15:29.420807   29546 cache_images.go:84] Images are preloaded, skipping loading
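Because the fresh node had no images, minikube copied the ~389MB preload tarball (preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4) into the guest, unpacked it under /var with lz4, and re-ran crictl images to confirm everything required for v1.31.1 is now cached. A manual spot check would look like this (sketch):

    minikube ssh -p ha-797954 -- "sudo crictl images | grep registry.k8s.io/kube-apiserver"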
	I0920 21:15:29.420830   29546 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.1 crio true true} ...
	I0920 21:15:29.420939   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
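The rendered [Service] override sets ExecStart with the node-specific kubelet flags (--hostname-override, --node-ip) and, a few lines below, is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside the generated /lib/systemd/system/kubelet.service. To read both files back from the node (sketch):

    minikube ssh -p ha-797954 -- "sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /lib/systemd/system/kubelet.service"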
	I0920 21:15:29.421019   29546 ssh_runner.go:195] Run: crio config
	I0920 21:15:29.467359   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:29.467388   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:29.467398   29546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:15:29.467423   29546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-797954 NodeName:ha-797954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:15:29.467586   29546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-797954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
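This multi-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is copied a little later to /var/tmp/minikube/kubeadm.yaml.new and promoted to kubeadm.yaml before kubeadm runs. A sketch for pulling it back out and, assuming the bundled kubeadm is new enough to ship the config validate subcommand, checking it offline:

    minikube ssh -p ha-797954 -- "sudo cat /var/tmp/minikube/kubeadm.yaml"
    # assumes 'kubeadm config validate' exists in this kubeadm build
    minikube ssh -p ha-797954 -- "sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml"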
	
	I0920 21:15:29.467615   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:15:29.467665   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:15:29.486323   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:15:29.486445   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
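The generated kube-vip static pod binds the HA virtual IP 192.168.39.254 to eth0, turns on leader election and control-plane load balancing on port 8443, and is written below to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet launches it as a static pod. Once the node is up, a sketch for checking that the VIP and the container are in place:

    minikube ssh -p ha-797954 -- "ip addr show eth0 | grep 192.168.39.254 && sudo crictl ps --name kube-vip"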
	I0920 21:15:29.486507   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:15:29.497349   29546 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:15:29.497421   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 21:15:29.507626   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 21:15:29.524821   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:15:29.542162   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 21:15:29.559146   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 21:15:29.575893   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:15:29.579898   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:15:29.592448   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:29.726203   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:15:29.743307   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.240
	I0920 21:15:29.743326   29546 certs.go:194] generating shared ca certs ...
	I0920 21:15:29.743341   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.743501   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:15:29.743558   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:15:29.743572   29546 certs.go:256] generating profile certs ...
	I0920 21:15:29.743620   29546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:15:29.743648   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt with IP's: []
	I0920 21:15:29.822848   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt ...
	I0920 21:15:29.822877   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt: {Name:mk9531e4d2ca25e31047a503b6559591e4241e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823097   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key ...
	I0920 21:15:29.823113   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key: {Name:mk780710aa75c203f633d5fd9de901af5921b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823230   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a
	I0920 21:15:29.823254   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254]
	I0920 21:15:29.947576   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a ...
	I0920 21:15:29.947608   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a: {Name:mk184aed753f9fcdadd87be1874f607263a47d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947790   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a ...
	I0920 21:15:29.947809   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a: {Name:mke29d551b425a7cf297f097b7c968e0ec8e9532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947904   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:15:29.948003   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:15:29.948077   29546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:15:29.948097   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt with IP's: []
	I0920 21:15:30.056624   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt ...
	I0920 21:15:30.056653   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt: {Name:mk2dc3b0b0385ac4b3979e15bd90a0ff1e5ef2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:30.056825   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key ...
	I0920 21:15:30.056842   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key: {Name:mkbe53da9f2669d754e44e4038f6b341d45e8e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:30.056936   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:15:30.056960   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:15:30.056983   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:15:30.057001   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:15:30.057014   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:15:30.057028   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:15:30.057044   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:15:30.057067   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:15:30.057124   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:15:30.057165   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:15:30.057179   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:15:30.057219   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:15:30.057249   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:15:30.057279   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:15:30.057335   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:30.057375   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.057397   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.057417   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.058000   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:15:30.084548   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:15:30.110347   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:15:30.134953   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:15:30.160708   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 21:15:30.184711   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 21:15:30.208901   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:15:30.233498   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:15:30.258374   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:15:30.283305   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:15:30.307934   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:15:30.331963   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:15:30.348398   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:15:30.353975   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:15:30.365091   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369663   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369715   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.375452   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:15:30.386174   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:15:30.397261   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401699   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401757   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.407395   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:15:30.418060   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:15:30.429189   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434030   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434098   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.440220   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
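Each CA bundle copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e, b5213941 and 51391683 above), which is the layout OpenSSL's verification path expects. The naming step in isolation looks like this (sketch; the path is one of the files from this run):

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"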
	I0920 21:15:30.451523   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:15:30.455873   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:15:30.455921   29546 kubeadm.go:392] StartCluster: {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:15:30.455994   29546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:15:30.456043   29546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:15:30.494216   29546 cri.go:89] found id: ""
	I0920 21:15:30.494275   29546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 21:15:30.504666   29546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 21:15:30.517367   29546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 21:15:30.527919   29546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 21:15:30.527949   29546 kubeadm.go:157] found existing configuration files:
	
	I0920 21:15:30.528010   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 21:15:30.538805   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 21:15:30.538875   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 21:15:30.550872   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 21:15:30.562292   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 21:15:30.562344   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 21:15:30.573641   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.586772   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 21:15:30.586843   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.596548   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 21:15:30.605570   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 21:15:30.605630   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
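
The grep/rm exchange above is the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, otherwise it is removed so "kubeadm init" can regenerate it. A small sketch of that loop, under the assumption that a plain substring check is sufficient; this is not minikube's kubeadm.go:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, name := range files {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already targets the expected endpoint, keep it
		}
		// Missing file or wrong endpoint: delete it (ignoring "not exist"
		// errors), matching the "sudo rm -f" calls in the log.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "removing %s: %v\n", path, err)
		}
	}
}
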
	I0920 21:15:30.615094   29546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 21:15:30.716781   29546 kubeadm.go:310] W0920 21:15:30.700841     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.717889   29546 kubeadm.go:310] W0920 21:15:30.702167     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.826716   29546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 21:15:41.799746   29546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 21:15:41.799822   29546 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 21:15:41.799918   29546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 21:15:41.800049   29546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 21:15:41.800185   29546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 21:15:41.800260   29546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 21:15:41.801703   29546 out.go:235]   - Generating certificates and keys ...
	I0920 21:15:41.801811   29546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 21:15:41.801875   29546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 21:15:41.801961   29546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 21:15:41.802058   29546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 21:15:41.802140   29546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 21:15:41.802227   29546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 21:15:41.802292   29546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 21:15:41.802387   29546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802435   29546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 21:15:41.802542   29546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802632   29546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 21:15:41.802717   29546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 21:15:41.802782   29546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 21:15:41.802850   29546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 21:15:41.802936   29546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 21:15:41.803041   29546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 21:15:41.803118   29546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 21:15:41.803204   29546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 21:15:41.803284   29546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 21:15:41.803389   29546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 21:15:41.803478   29546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 21:15:41.804945   29546 out.go:235]   - Booting up control plane ...
	I0920 21:15:41.805074   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 21:15:41.805179   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 21:15:41.805284   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 21:15:41.805403   29546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 21:15:41.805511   29546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 21:15:41.805574   29546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 21:15:41.805717   29546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 21:15:41.805855   29546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 21:15:41.805933   29546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.138308ms
	I0920 21:15:41.806037   29546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 21:15:41.806112   29546 kubeadm.go:310] [api-check] The API server is healthy after 6.067641334s
	I0920 21:15:41.806250   29546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 21:15:41.806400   29546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 21:15:41.806458   29546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 21:15:41.806622   29546 kubeadm.go:310] [mark-control-plane] Marking the node ha-797954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 21:15:41.806712   29546 kubeadm.go:310] [bootstrap-token] Using token: 5kg5kj.1p3ye5r3aa4bs2us
	I0920 21:15:41.808141   29546 out.go:235]   - Configuring RBAC rules ...
	I0920 21:15:41.808280   29546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 21:15:41.808399   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 21:15:41.808574   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 21:15:41.808752   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 21:15:41.808897   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 21:15:41.809018   29546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 21:15:41.809193   29546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 21:15:41.809271   29546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 21:15:41.809343   29546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 21:15:41.809355   29546 kubeadm.go:310] 
	I0920 21:15:41.809445   29546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 21:15:41.809464   29546 kubeadm.go:310] 
	I0920 21:15:41.809580   29546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 21:15:41.809592   29546 kubeadm.go:310] 
	I0920 21:15:41.809629   29546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 21:15:41.809688   29546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 21:15:41.809745   29546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 21:15:41.809754   29546 kubeadm.go:310] 
	I0920 21:15:41.809832   29546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 21:15:41.809843   29546 kubeadm.go:310] 
	I0920 21:15:41.809916   29546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 21:15:41.809926   29546 kubeadm.go:310] 
	I0920 21:15:41.810006   29546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 21:15:41.810120   29546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 21:15:41.810222   29546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 21:15:41.810233   29546 kubeadm.go:310] 
	I0920 21:15:41.810391   29546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 21:15:41.810518   29546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 21:15:41.810529   29546 kubeadm.go:310] 
	I0920 21:15:41.810655   29546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.810800   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 21:15:41.810821   29546 kubeadm.go:310] 	--control-plane 
	I0920 21:15:41.810824   29546 kubeadm.go:310] 
	I0920 21:15:41.810951   29546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 21:15:41.810988   29546 kubeadm.go:310] 
	I0920 21:15:41.811099   29546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.811201   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
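
The --discovery-token-ca-cert-hash value in the join commands above follows the standard kubeadm scheme: a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of recomputing it from the CA file; the path is an assumption for illustration:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in CA certificate")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
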
	I0920 21:15:41.811212   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:41.811218   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:41.812792   29546 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 21:15:41.814273   29546 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 21:15:41.819866   29546 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 21:15:41.819883   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 21:15:41.845113   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 21:15:42.242872   29546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 21:15:42.242979   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.243015   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954 minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=true
	I0920 21:15:42.278969   29546 ops.go:34] apiserver oom_adj: -16
	I0920 21:15:42.447710   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.948122   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.448766   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.948784   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.448608   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.948709   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.448374   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.948002   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:46.073838   29546 kubeadm.go:1113] duration metric: took 3.830954868s to wait for elevateKubeSystemPrivileges
	I0920 21:15:46.073874   29546 kubeadm.go:394] duration metric: took 15.617956047s to StartCluster
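
The repeated "kubectl get sa default" calls above are a simple readiness poll: the command is retried roughly every 500ms until the default service account exists, which is the signal that kube-system privileges can be elevated. A sketch of that polling loop; the binary path and timeout are illustrative assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	os.Exit(1)
}
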
	I0920 21:15:46.073900   29546 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.073984   29546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.074611   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.074804   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 21:15:46.074835   29546 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:46.074859   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:15:46.074866   29546 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 21:15:46.074942   29546 addons.go:69] Setting storage-provisioner=true in profile "ha-797954"
	I0920 21:15:46.074955   29546 addons.go:234] Setting addon storage-provisioner=true in "ha-797954"
	I0920 21:15:46.074983   29546 addons.go:69] Setting default-storageclass=true in profile "ha-797954"
	I0920 21:15:46.075001   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.075002   29546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-797954"
	I0920 21:15:46.075035   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:46.075376   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075406   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.075455   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075491   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.090209   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0920 21:15:46.090325   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0920 21:15:46.090697   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.090729   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.091211   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091226   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091404   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091444   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091507   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091772   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091914   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.092087   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.092132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.094002   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.094264   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 21:15:46.094718   29546 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 21:15:46.094872   29546 addons.go:234] Setting addon default-storageclass=true in "ha-797954"
	I0920 21:15:46.094901   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.095181   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.095211   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.107218   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0920 21:15:46.107712   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.108266   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.108304   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.108613   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.108761   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.109409   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I0920 21:15:46.109837   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.110315   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.110337   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.110525   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.110662   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.111094   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.111132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.112949   29546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 21:15:46.114785   29546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.114809   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 21:15:46.114835   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.118672   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119149   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.119203   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119483   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.119680   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.119847   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.119996   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:46.128013   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0920 21:15:46.128448   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.128883   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.128912   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.129281   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.129455   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.130827   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.131041   29546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.131057   29546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 21:15:46.131075   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.133356   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133727   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.133763   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133887   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.134032   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.134204   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.134338   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:46.285450   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 21:15:46.287399   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.336627   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.881947   29546 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
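
The sed pipeline above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-only gateway (192.168.39.1 here). A rough sketch of the same transformation applied to a Corefile string, purely for illustration; it inserts only the hosts block, operating on text instead of the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block immediately before the forward directive,
		// mirroring the "/^        forward .../i" sed address in the log.
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
}
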
	I0920 21:15:47.160811   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160830   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.160889   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160910   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161142   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161165   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161176   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161184   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161272   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161305   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161318   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161334   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161364   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161474   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161485   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161517   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161590   29546 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 21:15:47.161611   29546 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 21:15:47.161708   29546 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 21:15:47.161717   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.161727   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.161733   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.162842   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.164042   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.164114   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.176849   29546 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 21:15:47.177419   29546 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 21:15:47.177432   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.177439   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.177443   29546 round_trippers.go:473]     Content-Type: application/json
	I0920 21:15:47.177445   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.180165   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:15:47.180318   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.180329   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.180614   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.180655   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.180655   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.182546   29546 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 21:15:47.183737   29546 addons.go:510] duration metric: took 1.108869145s for enable addons: enabled=[storage-provisioner default-storageclass]
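
The GET and PUT against /apis/storage.k8s.io/v1/storageclasses above correspond to marking the "standard" StorageClass as the cluster default. A client-go sketch of that update, not minikube's own code; the kubeconfig path is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Fetch the existing StorageClass object.
	sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	// PUT it back, which is the round trip visible in the log above.
	if _, err := client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard StorageClass marked as default")
}
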
	I0920 21:15:47.183771   29546 start.go:246] waiting for cluster config update ...
	I0920 21:15:47.183786   29546 start.go:255] writing updated cluster config ...
	I0920 21:15:47.185521   29546 out.go:201] 
	I0920 21:15:47.187156   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:47.187233   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.188857   29546 out.go:177] * Starting "ha-797954-m02" control-plane node in "ha-797954" cluster
	I0920 21:15:47.190069   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:47.190087   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:15:47.190183   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:15:47.190195   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:15:47.190259   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.190407   29546 start.go:360] acquireMachinesLock for ha-797954-m02: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:15:47.190448   29546 start.go:364] duration metric: took 22.919µs to acquireMachinesLock for "ha-797954-m02"
	I0920 21:15:47.190469   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:47.190532   29546 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 21:15:47.192163   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:15:47.192261   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:47.192300   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:47.207012   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0920 21:15:47.207450   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:47.207959   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:47.207987   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:47.208333   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:47.208512   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:15:47.208644   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:15:47.208810   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:15:47.208842   29546 client.go:168] LocalClient.Create starting
	I0920 21:15:47.208877   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:15:47.208918   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.208938   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209003   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:15:47.209029   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.209043   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209069   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:15:47.209080   29546 main.go:141] libmachine: (ha-797954-m02) Calling .PreCreateCheck
	I0920 21:15:47.209235   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:15:47.209601   29546 main.go:141] libmachine: Creating machine...
	I0920 21:15:47.209615   29546 main.go:141] libmachine: (ha-797954-m02) Calling .Create
	I0920 21:15:47.209743   29546 main.go:141] libmachine: (ha-797954-m02) Creating KVM machine...
	I0920 21:15:47.211023   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing default KVM network
	I0920 21:15:47.211192   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing private KVM network mk-ha-797954
	I0920 21:15:47.211306   29546 main.go:141] libmachine: (ha-797954-m02) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.211329   29546 main.go:141] libmachine: (ha-797954-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:15:47.211404   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.211303   29933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.211513   29546 main.go:141] libmachine: (ha-797954-m02) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:15:47.448349   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.448214   29933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa...
	I0920 21:15:47.615677   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615554   29933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk...
	I0920 21:15:47.615707   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing magic tar header
	I0920 21:15:47.615718   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing SSH key tar header
	I0920 21:15:47.615726   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615660   29933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.615794   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02
	I0920 21:15:47.615812   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 (perms=drwx------)
	I0920 21:15:47.615820   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:15:47.615834   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.615839   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:15:47.615846   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:15:47.615854   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:15:47.615871   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:15:47.615884   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home
	I0920 21:15:47.615890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Skipping /home - not owner
	I0920 21:15:47.615899   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:15:47.615913   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:15:47.615923   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:15:47.615931   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:15:47.615945   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:47.617011   29546 main.go:141] libmachine: (ha-797954-m02) define libvirt domain using xml: 
	I0920 21:15:47.617032   29546 main.go:141] libmachine: (ha-797954-m02) <domain type='kvm'>
	I0920 21:15:47.617040   29546 main.go:141] libmachine: (ha-797954-m02)   <name>ha-797954-m02</name>
	I0920 21:15:47.617044   29546 main.go:141] libmachine: (ha-797954-m02)   <memory unit='MiB'>2200</memory>
	I0920 21:15:47.617050   29546 main.go:141] libmachine: (ha-797954-m02)   <vcpu>2</vcpu>
	I0920 21:15:47.617057   29546 main.go:141] libmachine: (ha-797954-m02)   <features>
	I0920 21:15:47.617064   29546 main.go:141] libmachine: (ha-797954-m02)     <acpi/>
	I0920 21:15:47.617071   29546 main.go:141] libmachine: (ha-797954-m02)     <apic/>
	I0920 21:15:47.617079   29546 main.go:141] libmachine: (ha-797954-m02)     <pae/>
	I0920 21:15:47.617093   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617100   29546 main.go:141] libmachine: (ha-797954-m02)   </features>
	I0920 21:15:47.617108   29546 main.go:141] libmachine: (ha-797954-m02)   <cpu mode='host-passthrough'>
	I0920 21:15:47.617113   29546 main.go:141] libmachine: (ha-797954-m02)   
	I0920 21:15:47.617117   29546 main.go:141] libmachine: (ha-797954-m02)   </cpu>
	I0920 21:15:47.617121   29546 main.go:141] libmachine: (ha-797954-m02)   <os>
	I0920 21:15:47.617128   29546 main.go:141] libmachine: (ha-797954-m02)     <type>hvm</type>
	I0920 21:15:47.617133   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='cdrom'/>
	I0920 21:15:47.617140   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='hd'/>
	I0920 21:15:47.617147   29546 main.go:141] libmachine: (ha-797954-m02)     <bootmenu enable='no'/>
	I0920 21:15:47.617156   29546 main.go:141] libmachine: (ha-797954-m02)   </os>
	I0920 21:15:47.617173   29546 main.go:141] libmachine: (ha-797954-m02)   <devices>
	I0920 21:15:47.617187   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='cdrom'>
	I0920 21:15:47.617195   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/boot2docker.iso'/>
	I0920 21:15:47.617200   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hdc' bus='scsi'/>
	I0920 21:15:47.617205   29546 main.go:141] libmachine: (ha-797954-m02)       <readonly/>
	I0920 21:15:47.617212   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617217   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='disk'>
	I0920 21:15:47.617226   29546 main.go:141] libmachine: (ha-797954-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:15:47.617233   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk'/>
	I0920 21:15:47.617247   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hda' bus='virtio'/>
	I0920 21:15:47.617263   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617275   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617287   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='mk-ha-797954'/>
	I0920 21:15:47.617297   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617304   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617314   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617325   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='default'/>
	I0920 21:15:47.617331   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617336   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617350   29546 main.go:141] libmachine: (ha-797954-m02)     <serial type='pty'>
	I0920 21:15:47.617362   29546 main.go:141] libmachine: (ha-797954-m02)       <target port='0'/>
	I0920 21:15:47.617368   29546 main.go:141] libmachine: (ha-797954-m02)     </serial>
	I0920 21:15:47.617379   29546 main.go:141] libmachine: (ha-797954-m02)     <console type='pty'>
	I0920 21:15:47.617391   29546 main.go:141] libmachine: (ha-797954-m02)       <target type='serial' port='0'/>
	I0920 21:15:47.617402   29546 main.go:141] libmachine: (ha-797954-m02)     </console>
	I0920 21:15:47.617411   29546 main.go:141] libmachine: (ha-797954-m02)     <rng model='virtio'>
	I0920 21:15:47.617441   29546 main.go:141] libmachine: (ha-797954-m02)       <backend model='random'>/dev/random</backend>
	I0920 21:15:47.617464   29546 main.go:141] libmachine: (ha-797954-m02)     </rng>
	I0920 21:15:47.617472   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617481   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617489   29546 main.go:141] libmachine: (ha-797954-m02)   </devices>
	I0920 21:15:47.617497   29546 main.go:141] libmachine: (ha-797954-m02) </domain>
	I0920 21:15:47.617508   29546 main.go:141] libmachine: (ha-797954-m02) 
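
The domain XML printed above is what the kvm2 driver hands to libvirt when creating the ha-797954-m02 VM. A stripped-down sketch of rendering a similar definition from a Go template; the field names and the subset of elements shown are assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/><target dev='hdc' bus='scsi'/><readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/><source file='{{.Disk}}'/><target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'><source network='{{.Network}}'/><model type='virtio'/></interface>
  </devices>
</domain>
`

type domain struct {
	Name, ISO, Disk, Network string
	MemoryMiB, CPUs          int
}

func main() {
	d := domain{
		Name:      "ha-797954-m02",
		MemoryMiB: 2200,
		CPUs:      2,
		ISO:       "/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/boot2docker.iso",
		Disk:      "/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk",
		Network:   "mk-ha-797954",
	}
	// The rendered XML would then be passed to libvirt by the kvm2 driver;
	// here it is only printed.
	template.Must(template.New("domain").Parse(domainTmpl)).Execute(os.Stdout, d)
}
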
	I0920 21:15:47.624001   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:b4:c0:b8 in network default
	I0920 21:15:47.624558   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring networks are active...
	I0920 21:15:47.624572   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:47.625191   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network default is active
	I0920 21:15:47.625499   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network mk-ha-797954 is active
	I0920 21:15:47.625846   29546 main.go:141] libmachine: (ha-797954-m02) Getting domain xml...
	I0920 21:15:47.626575   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:48.860761   29546 main.go:141] libmachine: (ha-797954-m02) Waiting to get IP...
	I0920 21:15:48.861461   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:48.861850   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:48.861881   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:48.861842   29933 retry.go:31] will retry after 246.176053ms: waiting for machine to come up
	I0920 21:15:49.109266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.109702   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.109741   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.109653   29933 retry.go:31] will retry after 305.989794ms: waiting for machine to come up
	I0920 21:15:49.417320   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.417758   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.417775   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.417722   29933 retry.go:31] will retry after 399.439851ms: waiting for machine to come up
	I0920 21:15:49.818420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.818870   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.818890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.818832   29933 retry.go:31] will retry after 366.677131ms: waiting for machine to come up
	I0920 21:15:50.187440   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.187949   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.187984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.187899   29933 retry.go:31] will retry after 641.811509ms: waiting for machine to come up
	I0920 21:15:50.831780   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.832244   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.832266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.832186   29933 retry.go:31] will retry after 849.351875ms: waiting for machine to come up
	I0920 21:15:51.683009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:51.683497   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:51.683523   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:51.683449   29933 retry.go:31] will retry after 880.265456ms: waiting for machine to come up
	I0920 21:15:52.564861   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:52.565353   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:52.565377   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:52.565317   29933 retry.go:31] will retry after 1.175823596s: waiting for machine to come up
	I0920 21:15:53.742596   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:53.743093   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:53.743120   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:53.743040   29933 retry.go:31] will retry after 1.383827805s: waiting for machine to come up
	I0920 21:15:55.128520   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:55.129000   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:55.129028   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:55.128941   29933 retry.go:31] will retry after 2.175983574s: waiting for machine to come up
	I0920 21:15:57.306647   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:57.307130   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:57.307166   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:57.307067   29933 retry.go:31] will retry after 2.423186714s: waiting for machine to come up
	I0920 21:15:59.731527   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:59.731911   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:59.731934   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:59.731876   29933 retry.go:31] will retry after 3.330564372s: waiting for machine to come up
	I0920 21:16:03.063640   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:03.063984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:16:03.064008   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:16:03.063942   29933 retry.go:31] will retry after 4.208745782s: waiting for machine to come up
	I0920 21:16:07.274472   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.274913   29546 main.go:141] libmachine: (ha-797954-m02) Found IP for machine: 192.168.39.63
	I0920 21:16:07.274942   29546 main.go:141] libmachine: (ha-797954-m02) Reserving static IP address...
	I0920 21:16:07.274956   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has current primary IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.275334   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "ha-797954-m02", mac: "52:54:00:53:e9:69", ip: "192.168.39.63"} in network mk-ha-797954
	I0920 21:16:07.352055   29546 main.go:141] libmachine: (ha-797954-m02) Reserved static IP address: 192.168.39.63
	I0920 21:16:07.352089   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:07.352098   29546 main.go:141] libmachine: (ha-797954-m02) Waiting for SSH to be available...
	I0920 21:16:07.354681   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.355003   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954
	I0920 21:16:07.355033   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:53:e9:69
	I0920 21:16:07.355155   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:07.355181   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:07.355212   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:07.355231   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:07.355247   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:07.358863   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:16:07.358889   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:16:07.358899   29546 main.go:141] libmachine: (ha-797954-m02) DBG | command : exit 0
	I0920 21:16:07.358910   29546 main.go:141] libmachine: (ha-797954-m02) DBG | err     : exit status 255
	I0920 21:16:07.358922   29546 main.go:141] libmachine: (ha-797954-m02) DBG | output  : 
	I0920 21:16:10.359142   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:10.362209   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362612   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.362643   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362812   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:10.362833   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:10.362856   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:10.362864   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:10.362874   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:10.491233   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: <nil>: 
	I0920 21:16:10.491477   29546 main.go:141] libmachine: (ha-797954-m02) KVM machine creation complete!
	I0920 21:16:10.491754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:10.492323   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492507   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492661   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:16:10.492673   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetState
	I0920 21:16:10.493804   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:16:10.493816   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:16:10.493821   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:16:10.493826   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.495917   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496311   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.496341   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496456   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.496641   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496775   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496906   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.497045   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.497249   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.497260   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:16:10.606330   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.606353   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:16:10.606360   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.608723   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609117   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.609139   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.609503   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609652   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609759   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.609876   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.610071   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.610085   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:16:10.724120   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:16:10.724202   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:16:10.724212   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:16:10.724219   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724500   29546 buildroot.go:166] provisioning hostname "ha-797954-m02"
	I0920 21:16:10.724528   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724726   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.727578   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728013   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.728055   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728155   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.728348   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728482   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728598   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.728757   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.728927   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.728942   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m02 && echo "ha-797954-m02" | sudo tee /etc/hostname
	I0920 21:16:10.853869   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m02
	
	I0920 21:16:10.853893   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.856573   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.856928   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.856950   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.857120   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.857298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857491   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857632   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.857799   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.858001   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.858027   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:16:10.976857   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.976894   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:16:10.976921   29546 buildroot.go:174] setting up certificates
	I0920 21:16:10.976930   29546 provision.go:84] configureAuth start
	I0920 21:16:10.976939   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.977193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:10.979787   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980127   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.980156   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980310   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.982562   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.982952   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.982987   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.983144   29546 provision.go:143] copyHostCerts
	I0920 21:16:10.983172   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983211   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:16:10.983220   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983297   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:16:10.983390   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983415   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:16:10.983423   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983457   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:16:10.983516   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983538   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:16:10.983547   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983578   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:16:10.983645   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m02 san=[127.0.0.1 192.168.39.63 ha-797954-m02 localhost minikube]
	I0920 21:16:11.277389   29546 provision.go:177] copyRemoteCerts
	I0920 21:16:11.277450   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:16:11.277472   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.280305   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280630   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.280660   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280787   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.280985   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.281128   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.281262   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.365413   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:16:11.365500   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:16:11.392730   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:16:11.392804   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:16:11.418158   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:16:11.418231   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 21:16:11.442606   29546 provision.go:87] duration metric: took 465.662046ms to configureAuth
	I0920 21:16:11.442634   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:16:11.442797   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:11.442863   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.445626   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446004   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.446037   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.446437   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446574   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446682   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.446829   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.447016   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.447037   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:16:11.680912   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:16:11.680937   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:16:11.680947   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetURL
	I0920 21:16:11.682272   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using libvirt version 6000000
	I0920 21:16:11.684914   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685189   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.685217   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685358   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:16:11.685371   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:16:11.685378   29546 client.go:171] duration metric: took 24.476526726s to LocalClient.Create
	I0920 21:16:11.685403   29546 start.go:167] duration metric: took 24.47659369s to libmachine.API.Create "ha-797954"
	I0920 21:16:11.685417   29546 start.go:293] postStartSetup for "ha-797954-m02" (driver="kvm2")
	I0920 21:16:11.685429   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:16:11.685450   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.685687   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:16:11.685714   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.687637   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.687978   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.688009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.688123   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.688289   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.688429   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.688559   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.773487   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:16:11.777926   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:16:11.777968   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:16:11.778045   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:16:11.778142   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:16:11.778153   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:16:11.778236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:16:11.787777   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:11.814167   29546 start.go:296] duration metric: took 128.737842ms for postStartSetup
	I0920 21:16:11.814211   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:11.814756   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.817446   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.817813   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.817843   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.818091   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:16:11.818271   29546 start.go:128] duration metric: took 24.627729213s to createHost
	I0920 21:16:11.818293   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.820157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.820457   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820597   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.820754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820876   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820975   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.821154   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.821361   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.821372   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:16:11.935903   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866971.911740237
	
	I0920 21:16:11.935925   29546 fix.go:216] guest clock: 1726866971.911740237
	I0920 21:16:11.935931   29546 fix.go:229] Guest: 2024-09-20 21:16:11.911740237 +0000 UTC Remote: 2024-09-20 21:16:11.818282234 +0000 UTC m=+73.759482878 (delta=93.458003ms)
	I0920 21:16:11.935947   29546 fix.go:200] guest clock delta is within tolerance: 93.458003ms
	I0920 21:16:11.935951   29546 start.go:83] releasing machines lock for "ha-797954-m02", held for 24.74549273s
	I0920 21:16:11.935977   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.936247   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.938751   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.939097   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.939121   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.941536   29546 out.go:177] * Found network options:
	I0920 21:16:11.943384   29546 out.go:177]   - NO_PROXY=192.168.39.240
	W0920 21:16:11.944703   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.944730   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945270   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945436   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945528   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:16:11.945565   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	W0920 21:16:11.945595   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.945670   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:16:11.945694   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.948268   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948343   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948669   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948696   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948719   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948739   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948891   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.948988   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.949074   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949141   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949205   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949260   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949324   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.949401   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:12.182357   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:16:12.188336   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:16:12.188410   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:16:12.207463   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:16:12.207490   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:16:12.207553   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:16:12.224528   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:16:12.239330   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:16:12.239398   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:16:12.254085   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:16:12.268825   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:16:12.382055   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:16:12.535413   29546 docker.go:233] disabling docker service ...
	I0920 21:16:12.535492   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:16:12.551349   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:16:12.564334   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:16:12.703477   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:16:12.833908   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:16:12.848370   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:16:12.867636   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:16:12.867695   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.878698   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:16:12.878757   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.889749   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.900887   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.911745   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:16:12.922758   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.933646   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.951100   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.961869   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:16:12.971480   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:16:12.971539   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:16:12.984107   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:16:12.993914   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:13.101378   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:16:13.200043   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:16:13.200113   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:16:13.204910   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:16:13.204975   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:16:13.208759   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:16:13.247360   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:16:13.247450   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.276920   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.308600   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:16:13.310088   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:16:13.311453   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:13.313821   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:13.314177   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314359   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:16:13.318746   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:16:13.331582   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:16:13.331825   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:13.332085   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.332122   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.347535   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0920 21:16:13.348063   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.348528   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.348548   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.348889   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.349049   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:16:13.350568   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:13.350840   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.350872   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.365373   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0920 21:16:13.365792   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.366261   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.366281   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.366592   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.366760   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:13.366907   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.63
	I0920 21:16:13.366920   29546 certs.go:194] generating shared ca certs ...
	I0920 21:16:13.366937   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.367097   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:16:13.367133   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:16:13.367142   29546 certs.go:256] generating profile certs ...
	I0920 21:16:13.367211   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:16:13.367235   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c
	I0920 21:16:13.367251   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.254]
	I0920 21:16:13.455682   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c ...
	I0920 21:16:13.455713   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c: {Name:mkf7fe9b668446e8a4052aa048fbbf7677e2fbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.455914   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c ...
	I0920 21:16:13.455929   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c: {Name:mk7e6a911b0af488356d39a10736e055b37e6a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.456006   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:16:13.456131   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:16:13.456249   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:16:13.456263   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:16:13.456276   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:16:13.456287   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:16:13.456297   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:16:13.456308   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:16:13.456317   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:16:13.456327   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:16:13.456340   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:16:13.456384   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:16:13.456413   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:16:13.456423   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:16:13.456445   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:16:13.456466   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:16:13.456487   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:16:13.456523   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:13.456549   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:16:13.456563   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:13.456574   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:16:13.456603   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:13.459546   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.459920   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:13.459938   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.460181   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:13.460348   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:13.460507   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:13.460594   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:13.535461   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:16:13.541124   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:16:13.556682   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:16:13.564013   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:16:13.575182   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:16:13.579951   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:16:13.591192   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:16:13.595498   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:16:13.606893   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:16:13.611454   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:16:13.622000   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:16:13.627844   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:16:13.640161   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:16:13.667082   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:16:13.691675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:16:13.718752   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:16:13.742806   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 21:16:13.767051   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:16:13.790981   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:16:13.815475   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:16:13.840035   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:16:13.864033   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:16:13.887540   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:16:13.912601   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:16:13.929455   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:16:13.946232   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:16:13.962927   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:16:13.979514   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:16:13.996480   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:16:14.013458   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:16:14.030697   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:16:14.036567   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:16:14.047523   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052253   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052312   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.058027   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:16:14.068874   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:16:14.080054   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084884   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084969   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.091061   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:16:14.102348   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:16:14.113259   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117584   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117628   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.123165   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
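The block above installs the cluster's CA certificates on the new machine: each PEM is copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL-based clients locate trust anchors. A minimal Go sketch of that hash-and-symlink step, shelling out to the same openssl and ln commands the log shows (the helper is illustrative, not minikube's certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert copies a PEM into the shared cert dir and links it under its
// OpenSSL subject hash, mirroring the "openssl x509 -hash" + "ln -fs" steps
// seen in the log. Hypothetical helper, for illustration only.
func installCACert(pemPath string) error {
	name := filepath.Base(pemPath)
	shared := filepath.Join("/usr/share/ca-certificates", name)
	if err := exec.Command("sudo", "cp", pemPath, shared).Run(); err != nil {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", shared).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	return exec.Command("sudo", "ln", "-fs", shared, link).Run()
}

func main() {
	if err := installCACert(os.Args[1]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}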
	I0920 21:16:14.133845   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:16:14.137859   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:16:14.137916   29546 kubeadm.go:934] updating node {m02 192.168.39.63 8443 v1.31.1 crio true true} ...
	I0920 21:16:14.138011   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:16:14.138044   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:16:14.138082   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:16:14.157200   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:16:14.157256   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
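The generated kube-vip config above is a static-pod manifest: cp_enable/lb_enable turn on control-plane leader election and load-balancing for the API servers behind the VIP 192.168.39.254, and the file is later written to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp a few lines below) so the kubelet starts it without needing a running API server. A minimal sketch of that final write step, assuming the rendered YAML is already on disk (illustrative only, not minikube's kube-vip.go):

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod drops a rendered manifest into the kubelet's static-pod
// directory, which is what the "scp memory --> /etc/kubernetes/manifests/..."
// step above amounts to. Hypothetical helper, for illustration only.
func writeStaticPod(name string, manifest []byte) error {
	dir := "/etc/kubernetes/manifests"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, name+".yaml"), manifest, 0o644)
}

func main() {
	manifest, err := os.ReadFile("kube-vip.yaml") // rendered config from above
	if err != nil {
		panic(err)
	}
	if err := writeStaticPod("kube-vip", manifest); err != nil {
		panic(err)
	}
}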
	I0920 21:16:14.157308   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.167159   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:16:14.167253   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.176775   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:16:14.176800   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176860   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176869   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 21:16:14.176896   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 21:16:14.181293   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:16:14.181317   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:16:14.830604   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.830707   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.836995   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:16:14.837029   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:16:14.885458   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:16:14.921200   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.921297   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.928788   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:16:14.928843   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
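The kubectl/kubeadm/kubelet binaries are fetched from dl.k8s.io with a checksum=file:...sha256 suffix, meaning each download is verified against its published SHA-256 before being cached locally and scp'd onto the node. A standalone sketch of that verify step for one binary, using the kubelet URL from the log (the helper is illustrative, not minikube's download package):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL to a local file and returns its SHA-256 hex digest.
func fetch(url, dest string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256") // published checksum file
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch")
		os.Exit(1)
	}
	fmt.Println("kubelet checksum verified")
}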
	I0920 21:16:15.405307   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:16:15.415161   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 21:16:15.432203   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:16:15.449139   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:16:15.466408   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:16:15.470447   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
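The one-liner above rewrites /etc/hosts on the node: it filters out any existing control-plane.minikube.internal entry, appends the VIP mapping for 192.168.39.254, and copies the result back through a temp file. A rough local Go equivalent, for illustration only (the real step runs remotely with sudo):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale control-plane.minikube.internal mapping.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry) // append the VIP mapping
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}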
	I0920 21:16:15.483385   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:15.605870   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:15.622726   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:15.623063   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:15.623100   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:15.638072   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I0920 21:16:15.638612   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:15.639116   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:15.639140   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:15.639467   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:15.639639   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:15.639767   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:16:15.639881   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:16:15.639903   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:15.642798   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643197   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:15.643225   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643351   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:15.643513   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:15.643635   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:15.643759   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:15.792657   29546 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:15.792709   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443"
	I0920 21:16:37.363850   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443": (21.571119677s)
	I0920 21:16:37.363880   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 21:16:37.826315   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m02 minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:16:37.965429   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:16:38.074454   29546 start.go:319] duration metric: took 22.434680822s to joinCluster
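After kubeadm join succeeds, two follow-up steps run against the new node: it is labelled with minikube metadata (updated_at, version, commit, name, primary=false) and the node-role.kubernetes.io/control-plane:NoSchedule taint is removed so the secondary control plane also schedules ordinary workloads. A sketch of those two kubectl calls, trimmed to the essentials (illustrative; the log runs them remotely with an explicit kubeconfig):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same post-join steps as in the log: label the node, then drop the
	// control-plane NoSchedule taint. Purely illustrative.
	cmds := [][]string{
		{"kubectl", "label", "--overwrite", "nodes", "ha-797954-m02",
			"minikube.k8s.io/name=ha-797954", "minikube.k8s.io/primary=false"},
		{"kubectl", "taint", "nodes", "ha-797954-m02",
			"node-role.kubernetes.io/control-plane:NoSchedule-"},
	}
	for _, c := range cmds {
		if out, err := exec.Command(c[0], c[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", c, err, out)
		}
	}
}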
	I0920 21:16:38.074530   29546 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:38.074847   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:38.076237   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:16:38.077911   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:38.297124   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:38.315206   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:16:38.315425   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:16:38.315479   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0920 21:16:38.315667   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m02" to be "Ready" ...
	I0920 21:16:38.315752   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.315761   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.315768   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.315773   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.327510   29546 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0920 21:16:38.816509   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.816531   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.816539   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.816543   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.820223   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.316657   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.316678   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.316687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.316692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.320207   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.816039   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.816063   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.816076   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.816081   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.819353   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.316075   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.316098   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.316106   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.316111   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.319539   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.320391   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:40.816105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.816128   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.816139   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.816143   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.819586   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:41.316211   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.316235   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.316245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.316249   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.349189   29546 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0920 21:16:41.816015   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.816038   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.816046   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.816049   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.819036   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:42.316442   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.316466   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.316476   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.316482   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.319998   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:42.320895   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:42.816283   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.816308   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.816320   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.816325   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.819743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.316284   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.316312   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.316323   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.316329   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.319785   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.816709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.816731   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.816742   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.816747   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.820871   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:44.316171   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.316193   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.316204   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.316208   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.319512   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.816802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.816826   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.816834   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.816839   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.820164   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.820813   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:45.315957   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.315983   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.315990   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.315994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.322831   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:16:45.816465   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.816487   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.816493   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.816499   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.820400   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.316676   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.316696   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.316705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.316710   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.320533   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.816215   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.816236   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.816245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.816250   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.820038   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:47.316157   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.316175   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.316182   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.316187   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.319007   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:47.319706   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:47.816061   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.816084   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.816093   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.816099   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.819550   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:48.316265   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.316297   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.316309   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.316315   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.324229   29546 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 21:16:48.816174   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.816197   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.816207   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.816212   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.819164   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:49.316193   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.316219   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.316227   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.316231   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.319642   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:49.320250   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:49.816604   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.816629   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.816639   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.816643   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.820105   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.316008   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.316032   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.316043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.316050   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.320037   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.816542   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.816569   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.816578   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.816584   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.819913   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.316794   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.316842   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.316847   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.320154   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.320874   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:51.816190   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.816232   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.816240   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.816245   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.819606   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:52.316313   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.316334   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.316342   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.316345   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.319035   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:52.815947   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.815976   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.815984   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.815989   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.819657   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.316260   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.316286   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.316296   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.316302   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.319552   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.816326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.816347   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.816357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.816361   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.820253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.820949   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:54.316293   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.316325   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.316345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.316351   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.320156   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:54.816089   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.816111   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.816119   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.816122   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.819881   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.316810   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.316841   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.316844   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.320308   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.816768   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.816797   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.816808   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.816814   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.820631   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.821331   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:56.315901   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.315944   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.315956   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.315969   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.320010   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:56.816119   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.816140   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.816154   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.816158   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.820007   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.316463   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.316490   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.316501   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.316509   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.320023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.815912   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.815949   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.815957   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.815962   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.819365   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.316195   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.316218   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.316226   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.316230   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.320091   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.320848   29546 node_ready.go:49] node "ha-797954-m02" has status "Ready":"True"
	I0920 21:16:58.320865   29546 node_ready.go:38] duration metric: took 20.005171668s for node "ha-797954-m02" to be "Ready" ...
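The repeated GETs above are a simple poll: roughly every 500ms minikube re-reads /api/v1/nodes/ha-797954-m02 until the node's Ready condition reports True, which here takes about 20s. The same wait written directly against client-go would look roughly like this (kubeconfig path and node name copied from the log; the code is an illustration, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-9614/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Re-read the node object and check its Ready condition.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-797954-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}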
	I0920 21:16:58.320873   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:16:58.320944   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:16:58.320955   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.320962   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.320966   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.326211   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:16:58.332482   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.332585   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:16:58.332594   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.332601   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.332605   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.336131   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.337007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.337022   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.337031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.337034   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.339737   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.340269   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.340285   29546 pod_ready.go:82] duration metric: took 7.773484ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.340294   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.340345   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:16:58.340352   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.340359   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.340363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.343186   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.343740   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.343754   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.343761   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.343764   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.346434   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.347695   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.347713   29546 pod_ready.go:82] duration metric: took 7.413319ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347723   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347819   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:16:58.347828   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.347835   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.347838   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.350799   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.351620   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.351636   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.351645   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.351650   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.354248   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.354637   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.354653   29546 pod_ready.go:82] duration metric: took 6.924523ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354662   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354733   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:16:58.354740   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.354747   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.354750   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.357249   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.357873   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.357891   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.357902   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.357909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.360353   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.360737   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.360754   29546 pod_ready.go:82] duration metric: took 6.065117ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.360772   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.517254   29546 request.go:632] Waited for 156.41306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517351   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517362   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.517375   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.517386   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.520791   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.716729   29546 request.go:632] Waited for 195.35622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716806   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716813   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.716825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.716832   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.720994   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:58.721498   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.721522   29546 pod_ready.go:82] duration metric: took 360.733755ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
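The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, which falls back to a small default (on the order of 5 requests/s with a burst of 10) when rest.Config.QPS and Burst are left unset, so short waits are expected during these rapid readiness checks. A minimal sketch of raising those limits on a client built from the same kubeconfig (the values are arbitrary and for illustration only):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; any kubeconfig works for the demonstration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-9614/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Zero QPS/Burst means client-go applies its small default limiter,
	// which is what produces the throttling waits in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}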
	I0920 21:16:58.721535   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.916778   29546 request.go:632] Waited for 195.15835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916853   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916860   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.916871   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.916879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.920176   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.116231   29546 request.go:632] Waited for 195.32753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116335   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.116348   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.116363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.120233   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.120679   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.120701   29546 pod_ready.go:82] duration metric: took 399.159254ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.120711   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.316770   29546 request.go:632] Waited for 195.998601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316829   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316835   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.316843   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.316848   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.320380   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.516308   29546 request.go:632] Waited for 195.285053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516365   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516370   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.516385   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.516408   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.519762   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.520332   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.520354   29546 pod_ready.go:82] duration metric: took 399.636336ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.520366   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.716282   29546 request.go:632] Waited for 195.838293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716355   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.716364   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.716369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.719757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.916877   29546 request.go:632] Waited for 196.253642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916926   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916930   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.916941   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.916954   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.920360   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.920904   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.920927   29546 pod_ready.go:82] duration metric: took 400.552955ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.920940   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.116981   29546 request.go:632] Waited for 195.965997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117046   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117052   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.117059   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.117064   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.120477   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.316320   29546 request.go:632] Waited for 195.169192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316423   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316431   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.316441   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.316453   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.319724   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.320229   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.320259   29546 pod_ready.go:82] duration metric: took 399.311666ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.320270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.516281   29546 request.go:632] Waited for 195.931067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516373   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516383   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.516394   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.516403   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.521923   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:17:00.716927   29546 request.go:632] Waited for 194.244639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.716994   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.717002   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.717011   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.717027   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.720660   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.721230   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.721249   29546 pod_ready.go:82] duration metric: took 400.97344ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.721259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.916498   29546 request.go:632] Waited for 195.181307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916580   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.916588   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.916592   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.920596   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.116571   29546 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116628   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116634   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.116641   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.116643   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.119984   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.120450   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.120470   29546 pod_ready.go:82] duration metric: took 399.204122ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.120483   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.316631   29546 request.go:632] Waited for 196.084204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316697   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316703   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.316710   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.316712   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.320693   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.516582   29546 request.go:632] Waited for 195.353132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516664   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516673   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.516684   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.516692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.519980   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.520627   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.520645   29546 pod_ready.go:82] duration metric: took 400.154808ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.520658   29546 pod_ready.go:39] duration metric: took 3.19977376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:17:01.520675   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:17:01.520730   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:17:01.537086   29546 api_server.go:72] duration metric: took 23.462517508s to wait for apiserver process to appear ...
	I0920 21:17:01.537114   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:17:01.537136   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:17:01.541714   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0920 21:17:01.541802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:17:01.541813   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.541825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.541834   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.542686   29546 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 21:17:01.542806   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:17:01.542827   29546 api_server.go:131] duration metric: took 5.705549ms to wait for apiserver health ...
	I0920 21:17:01.542841   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:17:01.717246   29546 request.go:632] Waited for 174.332661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717358   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.717367   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.717379   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.722159   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.726592   29546 system_pods.go:59] 17 kube-system pods found
	I0920 21:17:01.726620   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:01.726625   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:01.726629   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:01.726632   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:01.726636   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:01.726639   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:01.726642   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:01.726645   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:01.726650   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:01.726654   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:01.726657   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:01.726660   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:01.726662   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:01.726665   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:01.726668   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:01.726671   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:01.726674   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:01.726679   29546 system_pods.go:74] duration metric: took 183.829123ms to wait for pod list to return data ...
	I0920 21:17:01.726686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:17:01.917125   29546 request.go:632] Waited for 190.363748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917200   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917205   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.917212   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.917217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.921868   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.922074   29546 default_sa.go:45] found service account: "default"
	I0920 21:17:01.922087   29546 default_sa.go:55] duration metric: took 195.393903ms for default service account to be created ...
	I0920 21:17:01.922095   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:17:02.116523   29546 request.go:632] Waited for 194.354422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116574   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116579   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.116585   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.116590   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.121031   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:02.127015   29546 system_pods.go:86] 17 kube-system pods found
	I0920 21:17:02.127040   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:02.127047   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:02.127052   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:02.127056   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:02.127060   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:02.127064   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:02.127067   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:02.127071   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:02.127076   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:02.127079   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:02.127082   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:02.127086   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:02.127089   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:02.127093   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:02.127096   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:02.127099   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:02.127103   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:02.127109   29546 system_pods.go:126] duration metric: took 205.01008ms to wait for k8s-apps to be running ...
	I0920 21:17:02.127118   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:17:02.127161   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:02.143916   29546 system_svc.go:56] duration metric: took 16.788182ms WaitForService to wait for kubelet
	I0920 21:17:02.143952   29546 kubeadm.go:582] duration metric: took 24.069387794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:17:02.143974   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:17:02.316696   29546 request.go:632] Waited for 172.648744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316747   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316752   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.316763   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.316768   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.320760   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:02.321468   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321490   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321503   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321508   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321513   29546 node_conditions.go:105] duration metric: took 177.533781ms to run NodePressure ...
	I0920 21:17:02.321541   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:17:02.321567   29546 start.go:255] writing updated cluster config ...
	I0920 21:17:02.323943   29546 out.go:201] 
	I0920 21:17:02.325378   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:02.325468   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.326982   29546 out.go:177] * Starting "ha-797954-m03" control-plane node in "ha-797954" cluster
	I0920 21:17:02.328317   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:17:02.328336   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:17:02.328441   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:17:02.328453   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:17:02.328536   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.328735   29546 start.go:360] acquireMachinesLock for ha-797954-m03: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:17:02.328796   29546 start.go:364] duration metric: took 33.998µs to acquireMachinesLock for "ha-797954-m03"
	I0920 21:17:02.328812   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:02.328901   29546 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 21:17:02.330315   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:17:02.330414   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:02.330461   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:02.345778   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0920 21:17:02.346279   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:02.346747   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:02.346772   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:02.347086   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:02.347350   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:02.347468   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:02.347668   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:17:02.347694   29546 client.go:168] LocalClient.Create starting
	I0920 21:17:02.347736   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:17:02.347785   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347800   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347860   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:17:02.347881   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347891   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347908   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:17:02.347919   29546 main.go:141] libmachine: (ha-797954-m03) Calling .PreCreateCheck
	I0920 21:17:02.348067   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:02.348542   29546 main.go:141] libmachine: Creating machine...
	I0920 21:17:02.348556   29546 main.go:141] libmachine: (ha-797954-m03) Calling .Create
	I0920 21:17:02.348764   29546 main.go:141] libmachine: (ha-797954-m03) Creating KVM machine...
	I0920 21:17:02.350052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing default KVM network
	I0920 21:17:02.350134   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing private KVM network mk-ha-797954
	I0920 21:17:02.350236   29546 main.go:141] libmachine: (ha-797954-m03) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.350270   29546 main.go:141] libmachine: (ha-797954-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:17:02.350391   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.350253   30348 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.350445   29546 main.go:141] libmachine: (ha-797954-m03) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:17:02.577042   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.576911   30348 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa...
	I0920 21:17:02.643000   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642827   30348 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk...
	I0920 21:17:02.643045   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing magic tar header
	I0920 21:17:02.643062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing SSH key tar header
	I0920 21:17:02.643075   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642998   30348 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.643166   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03
	I0920 21:17:02.643195   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 (perms=drwx------)
	I0920 21:17:02.643206   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:17:02.643220   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.643226   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:17:02.643312   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:17:02.643333   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:17:02.643345   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:17:02.643356   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:17:02.643367   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:17:02.643381   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:17:02.643392   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:17:02.643412   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home
	I0920 21:17:02.643422   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:02.643428   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Skipping /home - not owner
	I0920 21:17:02.644451   29546 main.go:141] libmachine: (ha-797954-m03) define libvirt domain using xml: 
	I0920 21:17:02.644469   29546 main.go:141] libmachine: (ha-797954-m03) <domain type='kvm'>
	I0920 21:17:02.644475   29546 main.go:141] libmachine: (ha-797954-m03)   <name>ha-797954-m03</name>
	I0920 21:17:02.644480   29546 main.go:141] libmachine: (ha-797954-m03)   <memory unit='MiB'>2200</memory>
	I0920 21:17:02.644485   29546 main.go:141] libmachine: (ha-797954-m03)   <vcpu>2</vcpu>
	I0920 21:17:02.644491   29546 main.go:141] libmachine: (ha-797954-m03)   <features>
	I0920 21:17:02.644498   29546 main.go:141] libmachine: (ha-797954-m03)     <acpi/>
	I0920 21:17:02.644507   29546 main.go:141] libmachine: (ha-797954-m03)     <apic/>
	I0920 21:17:02.644515   29546 main.go:141] libmachine: (ha-797954-m03)     <pae/>
	I0920 21:17:02.644524   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644531   29546 main.go:141] libmachine: (ha-797954-m03)   </features>
	I0920 21:17:02.644536   29546 main.go:141] libmachine: (ha-797954-m03)   <cpu mode='host-passthrough'>
	I0920 21:17:02.644543   29546 main.go:141] libmachine: (ha-797954-m03)   
	I0920 21:17:02.644553   29546 main.go:141] libmachine: (ha-797954-m03)   </cpu>
	I0920 21:17:02.644561   29546 main.go:141] libmachine: (ha-797954-m03)   <os>
	I0920 21:17:02.644566   29546 main.go:141] libmachine: (ha-797954-m03)     <type>hvm</type>
	I0920 21:17:02.644573   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='cdrom'/>
	I0920 21:17:02.644577   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='hd'/>
	I0920 21:17:02.644582   29546 main.go:141] libmachine: (ha-797954-m03)     <bootmenu enable='no'/>
	I0920 21:17:02.644588   29546 main.go:141] libmachine: (ha-797954-m03)   </os>
	I0920 21:17:02.644595   29546 main.go:141] libmachine: (ha-797954-m03)   <devices>
	I0920 21:17:02.644606   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='cdrom'>
	I0920 21:17:02.644627   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/boot2docker.iso'/>
	I0920 21:17:02.644643   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hdc' bus='scsi'/>
	I0920 21:17:02.644649   29546 main.go:141] libmachine: (ha-797954-m03)       <readonly/>
	I0920 21:17:02.644656   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644661   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='disk'>
	I0920 21:17:02.644670   29546 main.go:141] libmachine: (ha-797954-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:17:02.644678   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk'/>
	I0920 21:17:02.644684   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hda' bus='virtio'/>
	I0920 21:17:02.644689   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644698   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644710   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='mk-ha-797954'/>
	I0920 21:17:02.644724   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644734   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644742   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644747   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='default'/>
	I0920 21:17:02.644754   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644759   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644765   29546 main.go:141] libmachine: (ha-797954-m03)     <serial type='pty'>
	I0920 21:17:02.644770   29546 main.go:141] libmachine: (ha-797954-m03)       <target port='0'/>
	I0920 21:17:02.644776   29546 main.go:141] libmachine: (ha-797954-m03)     </serial>
	I0920 21:17:02.644783   29546 main.go:141] libmachine: (ha-797954-m03)     <console type='pty'>
	I0920 21:17:02.644798   29546 main.go:141] libmachine: (ha-797954-m03)       <target type='serial' port='0'/>
	I0920 21:17:02.644809   29546 main.go:141] libmachine: (ha-797954-m03)     </console>
	I0920 21:17:02.644818   29546 main.go:141] libmachine: (ha-797954-m03)     <rng model='virtio'>
	I0920 21:17:02.644842   29546 main.go:141] libmachine: (ha-797954-m03)       <backend model='random'>/dev/random</backend>
	I0920 21:17:02.644850   29546 main.go:141] libmachine: (ha-797954-m03)     </rng>
	I0920 21:17:02.644855   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644858   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644865   29546 main.go:141] libmachine: (ha-797954-m03)   </devices>
	I0920 21:17:02.644875   29546 main.go:141] libmachine: (ha-797954-m03) </domain>
	I0920 21:17:02.644883   29546 main.go:141] libmachine: (ha-797954-m03) 
	I0920 21:17:02.651978   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:63:a3:c0 in network default
	I0920 21:17:02.652689   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:02.652709   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring networks are active...
	I0920 21:17:02.653425   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network default is active
	I0920 21:17:02.653794   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network mk-ha-797954 is active
	I0920 21:17:02.654206   29546 main.go:141] libmachine: (ha-797954-m03) Getting domain xml...
	I0920 21:17:02.654891   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:03.891211   29546 main.go:141] libmachine: (ha-797954-m03) Waiting to get IP...
	I0920 21:17:03.891998   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:03.892564   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:03.892584   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:03.892529   30348 retry.go:31] will retry after 249.905879ms: waiting for machine to come up
	I0920 21:17:04.144187   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.144615   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.144661   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.144597   30348 retry.go:31] will retry after 316.328897ms: waiting for machine to come up
	I0920 21:17:04.462062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.462511   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.462533   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.462462   30348 retry.go:31] will retry after 364.78104ms: waiting for machine to come up
	I0920 21:17:04.828892   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.829305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.829331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.829266   30348 retry.go:31] will retry after 450.317187ms: waiting for machine to come up
	I0920 21:17:05.280860   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.281373   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.281406   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.281315   30348 retry.go:31] will retry after 476.652629ms: waiting for machine to come up
	I0920 21:17:05.760024   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.760485   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.760518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.760438   30348 retry.go:31] will retry after 666.650329ms: waiting for machine to come up
	I0920 21:17:06.428088   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:06.428570   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:06.428605   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:06.428544   30348 retry.go:31] will retry after 895.276152ms: waiting for machine to come up
	I0920 21:17:07.325647   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:07.326053   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:07.326072   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:07.326016   30348 retry.go:31] will retry after 1.229974937s: waiting for machine to come up
	I0920 21:17:08.557784   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:08.558294   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:08.558321   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:08.558256   30348 retry.go:31] will retry after 1.130091493s: waiting for machine to come up
	I0920 21:17:09.689958   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:09.690504   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:09.690534   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:09.690449   30348 retry.go:31] will retry after 1.704043253s: waiting for machine to come up
	I0920 21:17:11.396325   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:11.396797   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:11.396820   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:11.396759   30348 retry.go:31] will retry after 2.089116318s: waiting for machine to come up
	I0920 21:17:13.487459   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:13.487939   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:13.487964   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:13.487890   30348 retry.go:31] will retry after 2.803235066s: waiting for machine to come up
	I0920 21:17:16.292437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:16.292814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:16.292830   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:16.292780   30348 retry.go:31] will retry after 3.964680632s: waiting for machine to come up
	I0920 21:17:20.260043   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:20.260437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:20.260457   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:20.260397   30348 retry.go:31] will retry after 5.26446379s: waiting for machine to come up
	I0920 21:17:25.528157   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.528572   29546 main.go:141] libmachine: (ha-797954-m03) Found IP for machine: 192.168.39.108
	I0920 21:17:25.528600   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has current primary IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.528606   29546 main.go:141] libmachine: (ha-797954-m03) Reserving static IP address...
	I0920 21:17:25.529052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "ha-797954-m03", mac: "52:54:00:37:e1:1a", ip: "192.168.39.108"} in network mk-ha-797954
	I0920 21:17:25.602757   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:25.602780   29546 main.go:141] libmachine: (ha-797954-m03) Reserved static IP address: 192.168.39.108
	I0920 21:17:25.602810   29546 main.go:141] libmachine: (ha-797954-m03) Waiting for SSH to be available...
	I0920 21:17:25.605777   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.606165   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954
	I0920 21:17:25.606196   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:37:e1:1a
	I0920 21:17:25.606399   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:25.606418   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:25.606467   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:25.606479   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:25.606491   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:25.610090   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:17:25.610107   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:17:25.610116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | command : exit 0
	I0920 21:17:25.610123   29546 main.go:141] libmachine: (ha-797954-m03) DBG | err     : exit status 255
	I0920 21:17:25.610132   29546 main.go:141] libmachine: (ha-797954-m03) DBG | output  : 
	I0920 21:17:28.610361   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:28.612774   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613300   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.613345   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613482   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:28.613508   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:28.613536   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:28.613547   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:28.613558   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:28.739518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 21:17:28.739782   29546 main.go:141] libmachine: (ha-797954-m03) KVM machine creation complete!
	I0920 21:17:28.740050   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:28.740606   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740766   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740895   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:17:28.740905   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetState
	I0920 21:17:28.742248   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:17:28.742261   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:17:28.742266   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:17:28.742271   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.744450   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.744848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.744878   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.745030   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.745195   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745319   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745434   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.745605   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.745786   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.745797   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:17:28.858225   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:28.858252   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:17:28.858263   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.860945   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861330   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.861367   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861582   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.861765   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.861923   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.862049   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.862220   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.862409   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.862421   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:17:28.971754   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:17:28.971833   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:17:28.971843   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:17:28.971858   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972111   29546 buildroot.go:166] provisioning hostname "ha-797954-m03"
	I0920 21:17:28.972133   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972282   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.974838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975230   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.975254   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975370   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.975557   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975667   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975797   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.975919   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.976108   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.976126   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m03 && echo "ha-797954-m03" | sudo tee /etc/hostname
	I0920 21:17:29.102868   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m03
	
	I0920 21:17:29.102918   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.106396   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.106814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.106838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.107105   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.107316   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107475   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107629   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.107783   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.107967   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.107983   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:17:29.224170   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:29.224200   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:17:29.224221   29546 buildroot.go:174] setting up certificates
	I0920 21:17:29.224238   29546 provision.go:84] configureAuth start
	I0920 21:17:29.224249   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:29.224497   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:29.228199   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228591   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.228621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228737   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.230940   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231290   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.231315   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231436   29546 provision.go:143] copyHostCerts
	I0920 21:17:29.231461   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231488   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:17:29.231497   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231560   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:17:29.231661   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231679   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:17:29.231683   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231706   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:17:29.231764   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231786   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:17:29.231795   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231816   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:17:29.231871   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m03 san=[127.0.0.1 192.168.39.108 ha-797954-m03 localhost minikube]
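The "generating server cert" step above issues a server certificate signed by the local minikube CA, with SAN entries covering the node's loopback address, its DHCP-assigned IP, and its hostnames. A minimal, self-contained sketch of that pattern with Go's crypto/x509 follows; it uses a throwaway in-process CA and is not minikube's actual provisioner or its on-disk CA material:

// Minimal sketch: issue a server certificate whose SANs match the ones
// logged above (127.0.0.1, 192.168.39.108, ha-797954-m03, localhost, minikube),
// signed by a throwaway CA generated in-process.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (illustration only).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs seen in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-797954-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-797954-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.108")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}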
	I0920 21:17:29.482373   29546 provision.go:177] copyRemoteCerts
	I0920 21:17:29.482434   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:17:29.482467   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.485158   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485535   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.485562   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485735   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.485957   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.486154   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.486290   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.571854   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:17:29.571937   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:17:29.597446   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:17:29.597536   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:17:29.623922   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:17:29.624016   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:17:29.649582   29546 provision.go:87] duration metric: took 425.330124ms to configureAuth
	I0920 21:17:29.649619   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:17:29.649891   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:29.650000   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.652875   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.653362   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653612   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.653807   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.653962   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.654098   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.654239   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.654427   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.654450   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:17:29.884767   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:17:29.884803   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:17:29.884813   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetURL
	I0920 21:17:29.886065   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using libvirt version 6000000
	I0920 21:17:29.888305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888643   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.888670   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888892   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:17:29.888907   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:17:29.888914   29546 client.go:171] duration metric: took 27.541211935s to LocalClient.Create
	I0920 21:17:29.888937   29546 start.go:167] duration metric: took 27.541269184s to libmachine.API.Create "ha-797954"
	I0920 21:17:29.888947   29546 start.go:293] postStartSetup for "ha-797954-m03" (driver="kvm2")
	I0920 21:17:29.888957   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:17:29.888974   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:29.889224   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:17:29.889254   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.891311   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.891640   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.891981   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.892102   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.892225   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.978349   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:17:29.983427   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:17:29.983455   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:17:29.983538   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:17:29.983634   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:17:29.983646   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:17:29.983753   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:17:29.995291   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:30.022903   29546 start.go:296] duration metric: took 133.944509ms for postStartSetup
	I0920 21:17:30.022958   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:30.023537   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.026168   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026517   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.026545   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026878   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:30.027177   29546 start.go:128] duration metric: took 27.698265683s to createHost
	I0920 21:17:30.027205   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.029751   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030099   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.030120   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030296   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.030478   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030638   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030742   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.030890   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:30.031075   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:30.031086   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:17:30.140028   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726867050.118100270
	
	I0920 21:17:30.140050   29546 fix.go:216] guest clock: 1726867050.118100270
	I0920 21:17:30.140059   29546 fix.go:229] Guest: 2024-09-20 21:17:30.11810027 +0000 UTC Remote: 2024-09-20 21:17:30.027192785 +0000 UTC m=+151.968393428 (delta=90.907485ms)
	I0920 21:17:30.140081   29546 fix.go:200] guest clock delta is within tolerance: 90.907485ms
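The delta reported here is the guest clock reading minus the host-side timestamp: 1726867050.118100270 - 1726867050.027192785 ≈ 0.090907 s, i.e. the 90.907485ms shown, which is inside the tolerance, so no guest clock adjustment is made.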
	I0920 21:17:30.140087   29546 start.go:83] releasing machines lock for "ha-797954-m03", held for 27.811283914s
	I0920 21:17:30.140109   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.140335   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.142848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.143223   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.143247   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.145549   29546 out.go:177] * Found network options:
	I0920 21:17:30.147066   29546 out.go:177]   - NO_PROXY=192.168.39.240,192.168.39.63
	W0920 21:17:30.148256   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.148273   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.148287   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.148801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149026   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149134   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:17:30.149168   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	W0920 21:17:30.149244   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.149266   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.149325   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:17:30.149344   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.151785   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152017   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152092   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152242   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152398   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152420   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152411   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152594   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152619   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152752   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152750   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.152866   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152988   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.415462   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:17:30.422013   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:17:30.422074   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:17:30.439757   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:17:30.439785   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:17:30.439851   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:17:30.457470   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:17:30.472261   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:17:30.472327   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:17:30.488825   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:17:30.504407   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:17:30.628900   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:17:30.786553   29546 docker.go:233] disabling docker service ...
	I0920 21:17:30.786634   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:17:30.801796   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:17:30.815995   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:17:30.961390   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:17:31.087877   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:17:31.103154   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:17:31.123733   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:17:31.123806   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.135304   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:17:31.135368   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.147157   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.158091   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.169385   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:17:31.181217   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.192527   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.210306   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.221334   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:17:31.233227   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:17:31.233302   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:17:31.249920   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:17:31.260187   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:31.382201   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:17:31.482956   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:17:31.483037   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:17:31.488476   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:17:31.488554   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:17:31.492584   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:17:31.538709   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:17:31.538785   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.572182   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.603894   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:17:31.605343   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:17:31.606667   29546 out.go:177]   - env NO_PROXY=192.168.39.240,192.168.39.63
	I0920 21:17:31.607748   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:31.610281   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610645   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:31.610679   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610868   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:17:31.615319   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:17:31.628360   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:17:31.628620   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:31.628963   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.629005   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.643946   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 21:17:31.644427   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.644862   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.644878   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.645201   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.645370   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:17:31.646870   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:31.647201   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.647238   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.662472   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0920 21:17:31.662941   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.663429   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.663454   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.663746   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.663910   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:31.664076   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.108
	I0920 21:17:31.664086   29546 certs.go:194] generating shared ca certs ...
	I0920 21:17:31.664103   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:31.664243   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:17:31.664295   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:17:31.664308   29546 certs.go:256] generating profile certs ...
	I0920 21:17:31.664400   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:17:31.664432   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a
	I0920 21:17:31.664451   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.108 192.168.39.254]
	I0920 21:17:31.999984   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a ...
	I0920 21:17:32.000017   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a: {Name:mkb265b9a0e489fa9c79757fad17ff69e435c7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000213   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a ...
	I0920 21:17:32.000228   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a: {Name:mk577de20bc228548ac6019ec38c5032817178c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000340   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:17:32.000512   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:17:32.000680   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:17:32.000701   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:17:32.000718   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:17:32.000728   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:17:32.000741   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:17:32.000756   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:17:32.000773   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:17:32.000790   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:17:32.019102   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:17:32.019214   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:17:32.019253   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:17:32.019263   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:17:32.019285   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:17:32.019307   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:17:32.019328   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:17:32.019367   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:32.019392   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.019407   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.019422   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.019452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:32.022485   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.022895   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:32.022924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.023142   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:32.023341   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:32.023495   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:32.023639   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:32.099301   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:17:32.104861   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:17:32.119922   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:17:32.124156   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:17:32.134999   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:17:32.139238   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:17:32.149597   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:17:32.154030   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:17:32.167439   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:17:32.172928   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:17:32.184258   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:17:32.189404   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:17:32.201402   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:17:32.227964   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:17:32.252476   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:17:32.280434   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:17:32.305098   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 21:17:32.330545   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:17:32.355250   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:17:32.380481   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:17:32.409052   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:17:32.434675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:17:32.460671   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:17:32.485722   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:17:32.503366   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:17:32.520848   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:17:32.537950   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:17:32.555973   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:17:32.574604   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:17:32.592470   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:17:32.612075   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:17:32.618522   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:17:32.629763   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634570   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634649   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.640735   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:17:32.651846   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:17:32.663634   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668436   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668505   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.674910   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:17:32.686102   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:17:32.697672   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702132   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702197   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.707913   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:17:32.719066   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:17:32.723392   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:17:32.723441   29546 kubeadm.go:934] updating node {m03 192.168.39.108 8443 v1.31.1 crio true true} ...
	I0920 21:17:32.723510   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:17:32.723543   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:17:32.723582   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:17:32.740751   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:17:32.740840   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0920 21:17:32.740906   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.750919   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:17:32.750992   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.760956   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:17:32.760966   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761008   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 21:17:32.761029   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:32.770258   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:17:32.770302   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:17:32.770469   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:17:32.770497   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:17:32.785590   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.785720   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.887818   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:17:32.887866   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0920 21:17:33.724236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:17:33.735002   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 21:17:33.752570   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:17:33.770318   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:17:33.790184   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:17:33.794843   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:17:33.808084   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:33.931534   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:33.948935   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:33.949425   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:33.949481   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:33.965177   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0920 21:17:33.966119   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:33.966693   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:33.966718   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:33.967135   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:33.967319   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:33.967487   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:17:33.967629   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:17:33.967652   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:33.970529   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.970999   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:33.971028   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.971157   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:33.971336   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:33.971484   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:33.971611   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:34.158062   29546 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:34.158116   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443"
	I0920 21:17:56.483713   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443": (22.325561758s)
	I0920 21:17:56.483748   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 21:17:57.184683   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m03 minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:17:57.333060   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:17:57.459382   29546 start.go:319] duration metric: took 23.491890367s to joinCluster
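The join sequence logged above reduces to a handful of commands executed on the new machine: the kubeadm join line printed by the primary, a kubelet reload/enable/start, and then the label/taint adjustments issued from the primary. A compressed sketch of that sequence as a sequential runner, assuming local execution for illustration only (in the test these commands go through minikube's ssh_runner to the joining node, and the token/hash are regenerated on every run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // joinSteps compresses the sequence shown in the log above; the discovery hash is
    // elided here, the full value appears in the kubeadm join line.
    var joinSteps = [][]string{
    	{"kubeadm", "join", "control-plane.minikube.internal:8443",
    		"--token", "au5bfe.w4m98std58j4wmo1",
    		"--discovery-token-ca-cert-hash", "sha256:...",
    		"--control-plane"},
    	{"systemctl", "daemon-reload"},
    	{"systemctl", "enable", "kubelet"},
    	{"systemctl", "start", "kubelet"},
    }

    func main() {
    	for _, step := range joinSteps {
    		cmd := exec.Command(step[0], step[1:]...)
    		fmt.Println("running:", cmd.String())
    		if err := cmd.Run(); err != nil {
    			// Stop at the first failing step, as a real join driver would.
    			fmt.Println("step failed:", err)
    			return
    		}
    	}
    }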
	I0920 21:17:57.459461   29546 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:57.459752   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:57.461273   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:17:57.463650   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:57.782773   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:57.853744   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:17:57.854084   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:17:57.854172   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
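The client config dumped above carries QPS:0 and Burst:0, so client-go falls back to its conservative default rate limiter (roughly 5 requests/s with a burst of 10); that default is what produces the "Waited for ... due to client-side throttling" lines further down. A minimal sketch of where those knobs live on a rest.Config, assuming an illustrative kubeconfig path rather than the profile path the test uses:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative kubeconfig path; the test points at its own profile under .minikube.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	// QPS/Burst of 0 (as in the dump above) means client-go applies its defaults,
    	// which is what causes the client-side throttling waits seen in this log.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", v.GitVersion)
    }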
	I0920 21:17:57.854425   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m03" to be "Ready" ...
	I0920 21:17:57.854502   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:57.854511   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:57.854521   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:57.854526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:57.858388   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.354955   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.355019   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.355031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.355038   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.358738   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.855362   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.855392   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.855404   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.855410   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.859676   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.355007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.355039   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.355047   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.355051   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.358804   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:59.855573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.855597   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.855608   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.855615   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.859651   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.860298   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:00.355648   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.355670   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.355679   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.355683   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.359549   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:00.854939   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.854979   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.854989   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.854994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.858910   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.354629   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.354654   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.354665   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.354671   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.358103   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.855123   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.855148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.855160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.855167   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.858757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.355433   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.355460   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.355470   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.355478   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.358859   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.359609   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:02.854611   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.854637   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.854646   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.854651   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.858023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.355696   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.355726   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.355738   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.355743   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.359438   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.854679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.854705   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.854714   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.854718   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.858836   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:04.355520   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.355553   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.355565   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.355572   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.359190   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:04.359707   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:04.855114   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.855137   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.855146   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.855151   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.859053   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.355120   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.355148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.355160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.355166   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.359138   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.855384   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.855416   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.855429   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.855436   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.860675   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:06.354757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.354801   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.354812   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.354818   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.358624   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:06.854603   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.854626   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.854636   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.854654   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.868553   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:06.869190   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:07.354649   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.354676   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.354687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.354700   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.359137   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:07.855272   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.855298   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.855322   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.855332   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.859818   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:08.354945   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.354975   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.354983   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.354987   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.358743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:08.854654   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.854694   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.854705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.854726   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.858133   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.354680   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.354702   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.354708   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.354711   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.358286   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.358822   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:09.855309   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.855334   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.855343   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.855347   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.859448   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.355394   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.355413   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.355423   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.355429   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.360033   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.855581   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.855604   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.855615   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.855622   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.859498   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.355312   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.355337   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.355345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.355349   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.358651   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.359127   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:11.855144   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.855174   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.855184   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.855209   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.859059   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.859886   29546 node_ready.go:49] node "ha-797954-m03" has status "Ready":"True"
	I0920 21:18:11.859907   29546 node_ready.go:38] duration metric: took 14.005464709s for node "ha-797954-m03" to be "Ready" ...
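The GET loop above is minikube's node_ready wait: it re-fetches the node object about twice a second until the Ready condition reports True. A rough client-go sketch of the same check; the function name, kubeconfig path and poll interval here are illustrative, not minikube's:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has its Ready condition set to True.
    func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for ctx.Err() == nil {
    		ready, err := nodeIsReady(ctx, cs, "ha-797954-m03")
    		if err == nil && ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log above polls roughly twice per second
    	}
    	fmt.Println("timed out waiting for node")
    }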
	I0920 21:18:11.859917   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:18:11.859993   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:11.860006   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.860016   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.860021   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.868162   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.876454   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.876533   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:18:11.876538   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.876545   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.876552   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.889831   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:11.890530   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.890547   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.890556   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.890559   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.899152   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.899662   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.899680   29546 pod_ready.go:82] duration metric: took 23.198308ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899690   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899746   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:18:11.899753   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.899760   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.899765   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.905423   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.906584   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.906599   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.906607   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.906612   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.909991   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.910558   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.910576   29546 pod_ready.go:82] duration metric: took 10.880105ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910585   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910644   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:18:11.910653   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.910660   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.910665   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.917040   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:11.917709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.917723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.917731   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.917734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.926063   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.926748   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.926766   29546 pod_ready.go:82] duration metric: took 16.17481ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926778   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926852   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:18:11.926862   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.926872   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.926879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.932632   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.933455   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:11.933475   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.933486   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.933492   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.936415   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:11.936956   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.936977   29546 pod_ready.go:82] duration metric: took 10.190753ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.936989   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.055265   29546 request.go:632] Waited for 118.210581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055358   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055367   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.055378   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.055387   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.060304   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.255635   29546 request.go:632] Waited for 194.221335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255716   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.255734   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.255739   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.259841   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.260392   29546 pod_ready.go:93] pod "etcd-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.260414   29546 pod_ready.go:82] duration metric: took 323.41646ms for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.260439   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.455886   29546 request.go:632] Waited for 195.375581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455950   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455955   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.455963   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.455971   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.459141   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.656082   29546 request.go:632] Waited for 196.149744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656147   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656164   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.656172   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.656176   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.659742   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.660232   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.660250   29546 pod_ready.go:82] duration metric: took 399.803876ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.660259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.855251   29546 request.go:632] Waited for 194.932878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855338   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855348   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.855357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.855369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.861682   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:13.055384   29546 request.go:632] Waited for 192.289197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055437   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055442   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.055449   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.055454   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.058687   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.059244   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.059263   29546 pod_ready.go:82] duration metric: took 398.997761ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.059272   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.256048   29546 request.go:632] Waited for 196.689932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256118   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256126   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.256135   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.256141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.259220   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.455932   29546 request.go:632] Waited for 195.902068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456005   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456028   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.456039   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.456043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.459696   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.460211   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.460227   29546 pod_ready.go:82] duration metric: took 400.948779ms for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.460237   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.655197   29546 request.go:632] Waited for 194.89988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655267   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655282   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.655309   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.655321   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.658570   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.855939   29546 request.go:632] Waited for 196.380436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855992   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855997   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.856004   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.856006   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.859600   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.860576   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.860591   29546 pod_ready.go:82] duration metric: took 400.349149ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.860600   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.055168   29546 request.go:632] Waited for 194.505202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055234   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055241   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.055251   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.055260   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.058275   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:14.255766   29546 request.go:632] Waited for 196.386628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255858   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255870   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.255879   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.255887   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.259522   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.260240   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.260258   29546 pod_ready.go:82] duration metric: took 399.650728ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.260270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.455432   29546 request.go:632] Waited for 195.087635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455501   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455506   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.455516   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.455526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.458903   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.655253   29546 request.go:632] Waited for 195.299076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655305   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655310   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.655317   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.655324   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.658799   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.659327   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.659347   29546 pod_ready.go:82] duration metric: took 399.070103ms for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.659357   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.855420   29546 request.go:632] Waited for 195.985286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855474   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855480   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.855491   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.855500   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.859401   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.055482   29546 request.go:632] Waited for 195.388569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055554   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055561   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.055571   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.055577   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.059246   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.059980   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.060007   29546 pod_ready.go:82] duration metric: took 400.64405ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.060015   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.256131   29546 request.go:632] Waited for 196.057373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256206   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256211   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.256217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.256220   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.260016   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.456032   29546 request.go:632] Waited for 195.376218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456111   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456119   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.456127   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.456134   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.459253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.459960   29546 pod_ready.go:93] pod "kube-proxy-jskx5" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.459978   29546 pod_ready.go:82] duration metric: took 399.957197ms for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.459988   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.656143   29546 request.go:632] Waited for 196.090186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656203   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656215   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.656234   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.656239   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.660293   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:15.855333   29546 request.go:632] Waited for 194.281869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855386   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855391   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.855398   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.855404   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.859011   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.859698   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.859721   29546 pod_ready.go:82] duration metric: took 399.724688ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.859733   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.055835   29546 request.go:632] Waited for 196.022812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055896   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055901   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.055909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.055916   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.059373   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.256147   29546 request.go:632] Waited for 196.031182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256218   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256225   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.256233   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.256236   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.259485   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.260205   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.260224   29546 pod_ready.go:82] duration metric: took 400.483331ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.260233   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.456145   29546 request.go:632] Waited for 195.844042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456197   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456203   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.456211   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.456216   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.459396   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.655597   29546 request.go:632] Waited for 195.489863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655686   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.655697   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.655702   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.659872   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:16.660490   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.660506   29546 pod_ready.go:82] duration metric: took 400.26753ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.660515   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.856056   29546 request.go:632] Waited for 195.455765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856110   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856115   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.856123   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.856132   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.860019   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:17.056049   29546 request.go:632] Waited for 195.339366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056101   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056106   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.056113   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.056117   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.060195   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.060851   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:17.060872   29546 pod_ready.go:82] duration metric: took 400.349558ms for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:17.060887   29546 pod_ready.go:39] duration metric: took 5.200957368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
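The pod_ready waits above apply the analogous check per system component: fetch the pod, inspect its Ready condition, and only then move on to the next label. A condensed client-go sketch of that condition check; the helper name and the single label selector shown are illustrative (the test cycles through all of the labels listed above):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady mirrors the check behind the pod_ready waits: a pod counts as ready
    // when its Ready condition is True.
    func podIsReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// One of the label selectors used for system-critical pods above.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
    		metav1.ListOptions{LabelSelector: "component=kube-apiserver"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, podIsReady(&p))
    	}
    }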
	I0920 21:18:17.060904   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:18:17.060962   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:18:17.079611   29546 api_server.go:72] duration metric: took 19.620113639s to wait for apiserver process to appear ...
	I0920 21:18:17.079636   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:18:17.079653   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:18:17.084062   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0920 21:18:17.084125   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:18:17.084133   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.084141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.084149   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.085207   29546 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 21:18:17.085276   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:18:17.085291   29546 api_server.go:131] duration metric: took 5.649897ms to wait for apiserver health ...
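The healthz and /version probes above are plain HTTPS GETs against the first control plane's apiserver. A small sketch of the same probe with net/http; note this skips certificate verification purely to keep the example short, whereas the real client authenticates with the profile's client certificate and CA as shown in the config dump earlier:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Illustrative only: verification is disabled here; the test uses client certs instead.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.39.240:8443" + path)
    		if err != nil {
    			fmt.Println(path, "error:", err)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
    	}
    }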
	I0920 21:18:17.085298   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:18:17.256039   29546 request.go:632] Waited for 170.668533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256110   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.256118   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.256123   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.262225   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:17.269500   29546 system_pods.go:59] 24 kube-system pods found
	I0920 21:18:17.269529   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.269535   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.269539   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.269544   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.269549   29546 system_pods.go:61] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.269554   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.269561   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.269569   29546 system_pods.go:61] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.269577   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.269584   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.269592   29546 system_pods.go:61] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.269597   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.269606   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.269609   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.269615   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.269618   29546 system_pods.go:61] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.269624   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.269628   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.269634   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.269640   29546 system_pods.go:61] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.269643   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.269650   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.269655   29546 system_pods.go:61] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.269663   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.269674   29546 system_pods.go:74] duration metric: took 184.367541ms to wait for pod list to return data ...
	I0920 21:18:17.269686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:18:17.455854   29546 request.go:632] Waited for 186.091197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455909   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455915   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.455925   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.455937   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.460043   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.460164   29546 default_sa.go:45] found service account: "default"
	I0920 21:18:17.460183   29546 default_sa.go:55] duration metric: took 190.487846ms for default service account to be created ...
	I0920 21:18:17.460191   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:18:17.655637   29546 request.go:632] Waited for 195.376962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655710   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655718   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.655729   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.655734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.661370   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:17.667683   29546 system_pods.go:86] 24 kube-system pods found
	I0920 21:18:17.667707   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.667713   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.667717   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.667721   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.667724   29546 system_pods.go:89] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.667728   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.667733   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.667738   29546 system_pods.go:89] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.667744   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.667753   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.667762   29546 system_pods.go:89] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.667770   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.667776   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.667784   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.667788   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.667792   29546 system_pods.go:89] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.667796   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.667802   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.667806   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.667812   29546 system_pods.go:89] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.667816   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.667820   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.667823   29546 system_pods.go:89] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.667829   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.667835   29546 system_pods.go:126] duration metric: took 207.638426ms to wait for k8s-apps to be running ...
	I0920 21:18:17.667846   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:18:17.667895   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:18:17.684265   29546 system_svc.go:56] duration metric: took 16.408361ms WaitForService to wait for kubelet
	I0920 21:18:17.684294   29546 kubeadm.go:582] duration metric: took 20.224798232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:18:17.684316   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:18:17.855695   29546 request.go:632] Waited for 171.306025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855762   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.855769   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.855773   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.860127   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.861061   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861078   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861097   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861107   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861116   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861121   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861127   29546 node_conditions.go:105] duration metric: took 176.804918ms to run NodePressure ...
	I0920 21:18:17.861140   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:18:17.861159   29546 start.go:255] writing updated cluster config ...
	I0920 21:18:17.861474   29546 ssh_runner.go:195] Run: rm -f paused
	I0920 21:18:17.914152   29546 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 21:18:17.916981   29546 out.go:177] * Done! kubectl is now configured to use "ha-797954" cluster and "default" namespace by default
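The readiness waits above are ordinary Kubernetes API list calls made through minikube's client-side rate limiter, which is why several requests log a ~170-200ms pause "due to client-side throttling, not priority and fairness" before hitting /serviceaccounts, /pods and /nodes. A minimal sketch of running the equivalent checks by hand, assuming the profile/context name ha-797954 taken from this log:

    kubectl --context ha-797954 -n default get serviceaccounts
    kubectl --context ha-797954 -n kube-system get pods
    kubectl --context ha-797954 get nodes -o wide
    minikube -p ha-797954 ssh "sudo systemctl is-active kubelet"   # the WaitForService/kubelet step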
	
	
	==> CRI-O <==
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.526012120Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321525989470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9bf96e64-149f-4687-8f6b-a57f113fe983 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.526519083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4461e545-3311-4af7-8269-40f00d057fc5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.526598865Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4461e545-3311-4af7-8269-40f00d057fc5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.526844668Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4461e545-3311-4af7-8269-40f00d057fc5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.574875112Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1e5d68bc-01e6-46c6-89fe-2ac4222ec960 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.574962792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1e5d68bc-01e6-46c6-89fe-2ac4222ec960 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.576247885Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9365d301-5e74-4087-8115-3847faa940b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.577110446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321577084084,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9365d301-5e74-4087-8115-3847faa940b0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.577872826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a760ead-7c40-438a-b01c-ee8c496b4b5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.577942962Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a760ead-7c40-438a-b01c-ee8c496b4b5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.578183644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a760ead-7c40-438a-b01c-ee8c496b4b5e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.620571875Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1366e2d6-2153-4fea-8857-41ab6f7161dc name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.621112859Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1366e2d6-2153-4fea-8857-41ab6f7161dc name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.623907034Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9512f1be-0231-4ac6-922b-b1d7395ec0dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.624334264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321624310507,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9512f1be-0231-4ac6-922b-b1d7395ec0dd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.624946827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1becc077-023b-4798-9e28-0c63aa627859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.625005424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1becc077-023b-4798-9e28-0c63aa627859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.625223989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1becc077-023b-4798-9e28-0c63aa627859 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.675636432Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c0cd73dd-3ec9-4c75-b9e1-aa76191f682f name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.675735998Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c0cd73dd-3ec9-4c75-b9e1-aa76191f682f name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.677701297Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2e8a45dc-6c7c-4c0c-ae24-2e8aa1196cba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.678109412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321678088322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2e8a45dc-6c7c-4c0c-ae24-2e8aa1196cba name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.678795324Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1a7c9c-6e39-4a55-9d6d-91ced967c108 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.678864858Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1a7c9c-6e39-4a55-9d6d-91ced967c108 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:01 ha-797954 crio[666]: time="2024-09-20 21:22:01.679095732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1a7c9c-6e39-4a55-9d6d-91ced967c108 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	468d1e1f7ac8a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d6db457e8bbe8       busybox-7dff88458-44q29
	e3c9e6da67286       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   48e2a20451c46       coredns-7c65d6cfc9-djhfr
	0e272e92406da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   891e1332d4c2d       coredns-7c65d6cfc9-bkcf8
	0fdd2277dc992       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0ac27cef2729c       storage-provisioner
	d34d703bb77f7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   794a246d61681       kindnet-6lk2f
	6367518948d4f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   457a98ab41d5f       kube-proxy-qx9ph
	371dadd621425       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2ced8553e3012       kube-vip-ha-797954
	34e481bafb0d8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0e7df03407c5d       kube-scheduler-ha-797954
	613c4c00096cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   388cd6346f6a7       etcd-ha-797954
	3ec2d3bd57d8b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   2c27a0b835060       kube-controller-manager-ha-797954
	0d975d52ea292       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   db69670c3a315       kube-apiserver-ha-797954
	
	
	==> coredns [0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab] <==
	[INFO] 10.244.0.4:57168 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.029279393s
	[INFO] 10.244.1.2:56742 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000226618s
	[INFO] 10.244.1.2:46151 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000739523s
	[INFO] 10.244.2.2:36832 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000342649s
	[INFO] 10.244.0.4:48215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142911s
	[INFO] 10.244.0.4:37051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021353644s
	[INFO] 10.244.0.4:56814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025486s
	[INFO] 10.244.0.4:60619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004011357s
	[INFO] 10.244.0.4:45781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219299s
	[INFO] 10.244.1.2:58895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003827s
	[INFO] 10.244.1.2:52799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348157s
	[INFO] 10.244.1.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146401s
	[INFO] 10.244.1.2:58888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000279379s
	[INFO] 10.244.2.2:45400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149773s
	[INFO] 10.244.2.2:42656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000284841s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168662s
	[INFO] 10.244.1.2:33336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145322s
	[INFO] 10.244.1.2:44953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000245865s
	[INFO] 10.244.2.2:58673 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183778s
	[INFO] 10.244.2.2:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198561s
	[INFO] 10.244.1.2:49222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209152s
	[INFO] 10.244.1.2:57989 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183561s
	[INFO] 10.244.1.2:57201 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017075s
	[INFO] 10.244.2.2:46144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184056s
	[INFO] 10.244.2.2:53312 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152142s
	
	
	==> coredns [e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50] <==
	[INFO] 10.244.0.4:37614 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011425s
	[INFO] 10.244.1.2:60075 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002047091s
	[INFO] 10.244.1.2:53774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137405s
	[INFO] 10.244.1.2:43436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001572851s
	[INFO] 10.244.1.2:50084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173888s
	[INFO] 10.244.2.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281143s
	[INFO] 10.244.2.2:49861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002195678s
	[INFO] 10.244.2.2:43162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194152s
	[INFO] 10.244.2.2:38062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149126s
	[INFO] 10.244.2.2:46161 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726482s
	[INFO] 10.244.2.2:38358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105606s
	[INFO] 10.244.0.4:34794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107864s
	[INFO] 10.244.0.4:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073572s
	[INFO] 10.244.0.4:59058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059888s
	[INFO] 10.244.1.2:33254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111604s
	[INFO] 10.244.1.2:55753 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094078s
	[INFO] 10.244.2.2:46481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128659s
	[INFO] 10.244.2.2:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192291s
	[INFO] 10.244.0.4:45612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142384s
	[INFO] 10.244.0.4:32977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240882s
	[INFO] 10.244.0.4:36320 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157906s
	[INFO] 10.244.0.4:57209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121261s
	[INFO] 10.244.1.2:52740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116141s
	[INFO] 10.244.2.2:45858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145799s
	[INFO] 10.244.2.2:32912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224434s
	
	
	==> describe nodes <==
	Name:               ha-797954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-797954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ee8e79db7754067acc26d8f79541cf4
	  System UUID:                1ee8e79d-b775-4067-acc2-6d8f79541cf4
	  Boot ID:                    2a01d700-0c79-4cac-934b-ff091b3f743b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-44q29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 coredns-7c65d6cfc9-bkcf8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-7c65d6cfc9-djhfr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-797954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m21s
	  kube-system                 kindnet-6lk2f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m17s
	  kube-system                 kube-apiserver-ha-797954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-controller-manager-ha-797954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-qx9ph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-scheduler-ha-797954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-vip-ha-797954                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m21s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m21s  kubelet          Node ha-797954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s  kubelet          Node ha-797954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s  kubelet          Node ha-797954 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  NodeReady                6m4s   kubelet          Node ha-797954 status is now: NodeReady
	  Normal  RegisteredNode           5m20s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  RegisteredNode           3m59s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	
	
	Name:               ha-797954-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:16:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:19:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-797954-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b99cf888073c415c8a60d9eed0a33a77
	  System UUID:                b99cf888-073c-415c-8a60-d9eed0a33a77
	  Boot ID:                    9ba91b19-4463-4b79-aaf1-951e8cebdd88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5m4df                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-797954-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-qh5rt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m28s
	  kube-system                 kube-apiserver-ha-797954-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-797954-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m19s
	  kube-system                 kube-proxy-2pjxq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-scheduler-ha-797954-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-vip-ha-797954-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m28s (x8 over 5m28s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m28s (x8 over 5m28s)  kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m28s (x7 over 5m28s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           5m20s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  NodeNotReady             114s                   node-controller  Node ha-797954-m02 status is now: NodeNotReady
	
	
	Name:               ha-797954-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-797954-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17e0c265db3c48ee8ba8b007e24689e7
	  System UUID:                17e0c265-db3c-48ee-8ba8-b007e24689e7
	  Boot ID:                    0d991bd8-dcf2-4a85-9be2-891290e9f74b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s4w7f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 etcd-ha-797954-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-tm8ml                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-797954-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-797954-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-jskx5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-797954-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-vip-ha-797954-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 4m3s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m10s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m10s)  kubelet          Node ha-797954-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m10s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m7s                  node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           4m5s                  node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           3m59s                 node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	
	
	Name:               ha-797954-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_18_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-797954-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 301819962cda48ca9fc543396f387646
	  System UUID:                30181996-2cda-48ca-9fc5-43396f387646
	  Boot ID:                    eb990af5-0bb6-4a23-a507-cf3ea2019e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5zw6l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-ghvv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)  kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-797954-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039875] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.837230] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.856812] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062395] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.182824] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.137335] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.286470] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.136041] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.054578] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.065217] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.071912] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.088568] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.488875] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.290079] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:16] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e] <==
	{"level":"warn","ts":"2024-09-20T21:22:01.958676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.969135Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.973166Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.988684Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.990890Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.994729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:01.995844Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.002976Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.006842Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.010313Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.016140Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.023264Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.029111Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.033444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.036647Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.041746Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.048948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.055977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.059617Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.063382Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.067148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.073080Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.079857Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.091320Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:02.144251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:22:02 up 7 min,  0 users,  load average: 0.17, 0.29, 0.17
	Linux ha-797954 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2] <==
	I0920 21:21:27.976980       1 main.go:299] handling current node
	I0920 21:21:37.972285       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:37.972329       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:37.972581       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:37.972607       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:37.972661       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:37.972683       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:37.972731       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:37.972737       1 main.go:299] handling current node
	I0920 21:21:47.971538       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:47.971596       1 main.go:299] handling current node
	I0920 21:21:47.971629       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:47.971638       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:47.971894       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:47.971922       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:47.971991       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:47.972013       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:57.976572       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:57.976762       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:57.976928       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:57.976953       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:57.977027       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:57.977047       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:57.977104       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:57.977123       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674] <==
	W0920 21:15:39.995416       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240]
	I0920 21:15:39.997201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 21:15:40.003324       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 21:15:40.250083       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 21:15:41.192926       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 21:15:41.217232       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 21:15:41.227871       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 21:15:45.151589       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 21:15:45.903425       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 21:18:22.997555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52530: use of closed network connection
	E0920 21:18:23.183834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52560: use of closed network connection
	E0920 21:18:23.372686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0920 21:18:23.592050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52590: use of closed network connection
	E0920 21:18:23.773849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55264: use of closed network connection
	E0920 21:18:23.984978       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55286: use of closed network connection
	E0920 21:18:24.179836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55308: use of closed network connection
	E0920 21:18:24.361809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55330: use of closed network connection
	E0920 21:18:24.551154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55360: use of closed network connection
	E0920 21:18:24.848743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0920 21:18:25.024085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55414: use of closed network connection
	E0920 21:18:25.219294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55430: use of closed network connection
	E0920 21:18:25.388448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55444: use of closed network connection
	E0920 21:18:25.573028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55472: use of closed network connection
	E0920 21:18:25.768039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55484: use of closed network connection
	W0920 21:19:49.996169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.240]
	
	
	==> kube-controller-manager [3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360] <==
	I0920 21:18:56.149866       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-797954-m04\" does not exist"
	I0920 21:18:56.187860       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-797954-m04" podCIDRs=["10.244.3.0/24"]
	I0920 21:18:56.187915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.187977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.481322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.872458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:57.921726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.169798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.196622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.119543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.120039       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-797954-m04"
	I0920 21:19:00.151133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:06.208726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212832       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:19:15.244613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:17.869384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:26.358050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:20:08.200630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.200711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:20:08.233657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.344363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.289751ms"
	I0920 21:20:08.345202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.016µs"
	I0920 21:20:10.222320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:13.505001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	
	
	==> kube-proxy [6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:15:47.138996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:15:47.173137       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	E0920 21:15:47.173284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:15:47.239418       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:15:47.244602       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:15:47.244668       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:15:47.323829       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:15:47.324671       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:15:47.324724       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:15:47.327319       1 config.go:199] "Starting service config controller"
	I0920 21:15:47.329757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:15:47.330178       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:15:47.330205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:15:47.331254       1 config.go:328] "Starting node config controller"
	I0920 21:15:47.331284       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:15:47.430176       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:15:47.430270       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:15:47.431643       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463] <==
	I0920 21:18:18.810601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5m4df" node="ha-797954-m02"
	E0920 21:18:18.861661       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.861738       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5110f648-42f3-47e2-bc5a-4b33d1468080(default/busybox-7dff88458-s4w7f) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-s4w7f"
	E0920 21:18:18.861759       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" pod="default/busybox-7dff88458-s4w7f"
	I0920 21:18:18.861781       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.864713       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:18.864788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 443788ae-8329-43bd-819c-a80e37254cf3(default/busybox-7dff88458-44q29) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-44q29"
	E0920 21:18:18.864810       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" pod="default/busybox-7dff88458-44q29"
	I0920 21:18:18.864831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:56.226900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.227110       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a8b2d657-43d8-4e0d-8200-e11fb64ed35c(kube-system/kindnet-5zw6l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5zw6l"
	E0920 21:18:56.227211       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-5zw6l"
	I0920 21:18:56.227382       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.228063       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.234373       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 586765d9-0991-47c1-aa4f-7c82cb8a2e76(kube-system/kube-proxy-ghvv9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvv9"
	E0920 21:18:56.234534       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-ghvv9"
	I0920 21:18:56.234608       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.279871       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kct8" node="ha-797954-m04"
	E0920 21:18:56.280008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-4kct8"
	E0920 21:18:56.280247       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dpjmv" node="ha-797954-m04"
	E0920 21:18:56.280445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-dpjmv"
	E0920 21:18:56.401044       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:18:56.401344       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e53077ca-c9f3-433c-8f93-766336e4a9ab(kube-system/kindnet-r4kzr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r4kzr"
	E0920 21:18:56.401460       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-r4kzr"
	I0920 21:18:56.401669       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	
	
	==> kubelet <==
	Sep 20 21:20:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:20:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280669    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280717    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282414    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282759    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284298    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284350    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286306    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286783    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289176    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289509    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293560    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293612    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.149114    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295407    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295545    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297121    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297162    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:22:01 ha-797954 kubelet[1315]: E0920 21:22:01.299368    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321299048597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:22:01 ha-797954 kubelet[1315]: E0920 21:22:01.299417    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321299048597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-797954 -n ha-797954
helpers_test.go:261: (dbg) Run:  kubectl --context ha-797954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr: (4.004495006s)
ha_test.go:435: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:438: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:441: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:444: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:448: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-797954 -n ha-797954
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 logs -n 25: (1.420767384s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m03_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m04 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp testdata/cp-test.txt                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m03 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-797954 node stop m02 -v=7                                                     | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-797954 node start m02 -v=7                                                    | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:14:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:14:58.093383   29546 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:14:58.093518   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093529   29546 out.go:358] Setting ErrFile to fd 2...
	I0920 21:14:58.093536   29546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:58.093721   29546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:14:58.094299   29546 out.go:352] Setting JSON to false
	I0920 21:14:58.095137   29546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3446,"bootTime":1726863452,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:14:58.095237   29546 start.go:139] virtualization: kvm guest
	I0920 21:14:58.097575   29546 out.go:177] * [ha-797954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:14:58.099174   29546 notify.go:220] Checking for updates...
	I0920 21:14:58.099184   29546 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:14:58.100699   29546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:14:58.102318   29546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:14:58.103731   29546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.105067   29546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:14:58.106539   29546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:14:58.108120   29546 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:14:58.144556   29546 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 21:14:58.146098   29546 start.go:297] selected driver: kvm2
	I0920 21:14:58.146111   29546 start.go:901] validating driver "kvm2" against <nil>
	I0920 21:14:58.146123   29546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:14:58.146850   29546 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.146917   29546 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:14:58.162003   29546 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:14:58.162056   29546 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 21:14:58.162303   29546 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:14:58.162329   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:14:58.162373   29546 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0920 21:14:58.162381   29546 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 21:14:58.162430   29546 start.go:340] cluster config:
	{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:14:58.162516   29546 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:14:58.164481   29546 out.go:177] * Starting "ha-797954" primary control-plane node in "ha-797954" cluster
	I0920 21:14:58.165856   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:14:58.165884   29546 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:14:58.165902   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:14:58.165974   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:14:58.165984   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:14:58.166298   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:14:58.166316   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json: {Name:mk8fa57fb542c3761d64832828491b92c6ccc2fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:14:58.166454   29546 start.go:360] acquireMachinesLock for ha-797954: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:14:58.166482   29546 start.go:364] duration metric: took 15.486µs to acquireMachinesLock for "ha-797954"
	I0920 21:14:58.166497   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:14:58.166552   29546 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 21:14:58.168264   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:14:58.168395   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:58.168428   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:58.182843   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35083
	I0920 21:14:58.183355   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:58.183917   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:14:58.183941   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:58.184213   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:58.184397   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:14:58.184524   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:14:58.184653   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:14:58.184689   29546 client.go:168] LocalClient.Create starting
	I0920 21:14:58.184716   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:14:58.184749   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184762   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184806   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:14:58.184823   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:14:58.184836   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:14:58.184851   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:14:58.184863   29546 main.go:141] libmachine: (ha-797954) Calling .PreCreateCheck
	I0920 21:14:58.185178   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:14:58.185555   29546 main.go:141] libmachine: Creating machine...
	I0920 21:14:58.185568   29546 main.go:141] libmachine: (ha-797954) Calling .Create
	I0920 21:14:58.185703   29546 main.go:141] libmachine: (ha-797954) Creating KVM machine...
	I0920 21:14:58.186926   29546 main.go:141] libmachine: (ha-797954) DBG | found existing default KVM network
	I0920 21:14:58.187682   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.187531   29569 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I0920 21:14:58.187702   29546 main.go:141] libmachine: (ha-797954) DBG | created network xml: 
	I0920 21:14:58.187715   29546 main.go:141] libmachine: (ha-797954) DBG | <network>
	I0920 21:14:58.187726   29546 main.go:141] libmachine: (ha-797954) DBG |   <name>mk-ha-797954</name>
	I0920 21:14:58.187736   29546 main.go:141] libmachine: (ha-797954) DBG |   <dns enable='no'/>
	I0920 21:14:58.187745   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187755   29546 main.go:141] libmachine: (ha-797954) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0920 21:14:58.187763   29546 main.go:141] libmachine: (ha-797954) DBG |     <dhcp>
	I0920 21:14:58.187769   29546 main.go:141] libmachine: (ha-797954) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0920 21:14:58.187775   29546 main.go:141] libmachine: (ha-797954) DBG |     </dhcp>
	I0920 21:14:58.187783   29546 main.go:141] libmachine: (ha-797954) DBG |   </ip>
	I0920 21:14:58.187799   29546 main.go:141] libmachine: (ha-797954) DBG |   
	I0920 21:14:58.187840   29546 main.go:141] libmachine: (ha-797954) DBG | </network>
	I0920 21:14:58.187855   29546 main.go:141] libmachine: (ha-797954) DBG | 
	I0920 21:14:58.192877   29546 main.go:141] libmachine: (ha-797954) DBG | trying to create private KVM network mk-ha-797954 192.168.39.0/24...
	I0920 21:14:58.259633   29546 main.go:141] libmachine: (ha-797954) DBG | private KVM network mk-ha-797954 192.168.39.0/24 created
	I0920 21:14:58.259663   29546 main.go:141] libmachine: (ha-797954) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.259676   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.259606   29569 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.259694   29546 main.go:141] libmachine: (ha-797954) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:14:58.259758   29546 main.go:141] libmachine: (ha-797954) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:14:58.505872   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.505739   29569 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa...
	I0920 21:14:58.596789   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596635   29569 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk...
	I0920 21:14:58.596822   29546 main.go:141] libmachine: (ha-797954) DBG | Writing magic tar header
	I0920 21:14:58.596834   29546 main.go:141] libmachine: (ha-797954) DBG | Writing SSH key tar header
	I0920 21:14:58.596844   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:58.596754   29569 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 ...
	I0920 21:14:58.596858   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954
	I0920 21:14:58.596904   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954 (perms=drwx------)
	I0920 21:14:58.596921   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:14:58.596928   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:14:58.596967   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:58.596981   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:14:58.596994   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:14:58.597010   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:14:58.597027   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:14:58.597042   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:14:58.597053   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:14:58.597060   29546 main.go:141] libmachine: (ha-797954) DBG | Checking permissions on dir: /home
	I0920 21:14:58.597068   29546 main.go:141] libmachine: (ha-797954) DBG | Skipping /home - not owner
	I0920 21:14:58.597077   29546 main.go:141] libmachine: (ha-797954) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:14:58.597083   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:58.598143   29546 main.go:141] libmachine: (ha-797954) define libvirt domain using xml: 
	I0920 21:14:58.598171   29546 main.go:141] libmachine: (ha-797954) <domain type='kvm'>
	I0920 21:14:58.598181   29546 main.go:141] libmachine: (ha-797954)   <name>ha-797954</name>
	I0920 21:14:58.598192   29546 main.go:141] libmachine: (ha-797954)   <memory unit='MiB'>2200</memory>
	I0920 21:14:58.598202   29546 main.go:141] libmachine: (ha-797954)   <vcpu>2</vcpu>
	I0920 21:14:58.598215   29546 main.go:141] libmachine: (ha-797954)   <features>
	I0920 21:14:58.598243   29546 main.go:141] libmachine: (ha-797954)     <acpi/>
	I0920 21:14:58.598261   29546 main.go:141] libmachine: (ha-797954)     <apic/>
	I0920 21:14:58.598269   29546 main.go:141] libmachine: (ha-797954)     <pae/>
	I0920 21:14:58.598292   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598311   29546 main.go:141] libmachine: (ha-797954)   </features>
	I0920 21:14:58.598326   29546 main.go:141] libmachine: (ha-797954)   <cpu mode='host-passthrough'>
	I0920 21:14:58.598354   29546 main.go:141] libmachine: (ha-797954)   
	I0920 21:14:58.598373   29546 main.go:141] libmachine: (ha-797954)   </cpu>
	I0920 21:14:58.598388   29546 main.go:141] libmachine: (ha-797954)   <os>
	I0920 21:14:58.598403   29546 main.go:141] libmachine: (ha-797954)     <type>hvm</type>
	I0920 21:14:58.598417   29546 main.go:141] libmachine: (ha-797954)     <boot dev='cdrom'/>
	I0920 21:14:58.598424   29546 main.go:141] libmachine: (ha-797954)     <boot dev='hd'/>
	I0920 21:14:58.598430   29546 main.go:141] libmachine: (ha-797954)     <bootmenu enable='no'/>
	I0920 21:14:58.598435   29546 main.go:141] libmachine: (ha-797954)   </os>
	I0920 21:14:58.598441   29546 main.go:141] libmachine: (ha-797954)   <devices>
	I0920 21:14:58.598451   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='cdrom'>
	I0920 21:14:58.598467   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/boot2docker.iso'/>
	I0920 21:14:58.598482   29546 main.go:141] libmachine: (ha-797954)       <target dev='hdc' bus='scsi'/>
	I0920 21:14:58.598498   29546 main.go:141] libmachine: (ha-797954)       <readonly/>
	I0920 21:14:58.598520   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598530   29546 main.go:141] libmachine: (ha-797954)     <disk type='file' device='disk'>
	I0920 21:14:58.598537   29546 main.go:141] libmachine: (ha-797954)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:14:58.598552   29546 main.go:141] libmachine: (ha-797954)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/ha-797954.rawdisk'/>
	I0920 21:14:58.598567   29546 main.go:141] libmachine: (ha-797954)       <target dev='hda' bus='virtio'/>
	I0920 21:14:58.598578   29546 main.go:141] libmachine: (ha-797954)     </disk>
	I0920 21:14:58.598592   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598603   29546 main.go:141] libmachine: (ha-797954)       <source network='mk-ha-797954'/>
	I0920 21:14:58.598612   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598623   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598631   29546 main.go:141] libmachine: (ha-797954)     <interface type='network'>
	I0920 21:14:58.598639   29546 main.go:141] libmachine: (ha-797954)       <source network='default'/>
	I0920 21:14:58.598650   29546 main.go:141] libmachine: (ha-797954)       <model type='virtio'/>
	I0920 21:14:58.598658   29546 main.go:141] libmachine: (ha-797954)     </interface>
	I0920 21:14:58.598668   29546 main.go:141] libmachine: (ha-797954)     <serial type='pty'>
	I0920 21:14:58.598676   29546 main.go:141] libmachine: (ha-797954)       <target port='0'/>
	I0920 21:14:58.598685   29546 main.go:141] libmachine: (ha-797954)     </serial>
	I0920 21:14:58.598696   29546 main.go:141] libmachine: (ha-797954)     <console type='pty'>
	I0920 21:14:58.598706   29546 main.go:141] libmachine: (ha-797954)       <target type='serial' port='0'/>
	I0920 21:14:58.598718   29546 main.go:141] libmachine: (ha-797954)     </console>
	I0920 21:14:58.598725   29546 main.go:141] libmachine: (ha-797954)     <rng model='virtio'>
	I0920 21:14:58.598738   29546 main.go:141] libmachine: (ha-797954)       <backend model='random'>/dev/random</backend>
	I0920 21:14:58.598754   29546 main.go:141] libmachine: (ha-797954)     </rng>
	I0920 21:14:58.598766   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598780   29546 main.go:141] libmachine: (ha-797954)     
	I0920 21:14:58.598790   29546 main.go:141] libmachine: (ha-797954)   </devices>
	I0920 21:14:58.598799   29546 main.go:141] libmachine: (ha-797954) </domain>
	I0920 21:14:58.598807   29546 main.go:141] libmachine: (ha-797954) 
	I0920 21:14:58.603113   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:c6:03:52 in network default
	I0920 21:14:58.603707   29546 main.go:141] libmachine: (ha-797954) Ensuring networks are active...
	I0920 21:14:58.603742   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:58.604393   29546 main.go:141] libmachine: (ha-797954) Ensuring network default is active
	I0920 21:14:58.604660   29546 main.go:141] libmachine: (ha-797954) Ensuring network mk-ha-797954 is active
	I0920 21:14:58.605088   29546 main.go:141] libmachine: (ha-797954) Getting domain xml...
	I0920 21:14:58.605647   29546 main.go:141] libmachine: (ha-797954) Creating domain...
	I0920 21:14:59.802448   29546 main.go:141] libmachine: (ha-797954) Waiting to get IP...
	I0920 21:14:59.803386   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:14:59.803845   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:14:59.803891   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:14:59.803834   29569 retry.go:31] will retry after 223.587991ms: waiting for machine to come up
	I0920 21:15:00.029437   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.029858   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.029879   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.029830   29569 retry.go:31] will retry after 354.177356ms: waiting for machine to come up
	I0920 21:15:00.385460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.385795   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.385813   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.385762   29569 retry.go:31] will retry after 379.865741ms: waiting for machine to come up
	I0920 21:15:00.767361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:00.767813   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:00.767841   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:00.767778   29569 retry.go:31] will retry after 505.320471ms: waiting for machine to come up
	I0920 21:15:01.274395   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.274767   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.274801   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.274739   29569 retry.go:31] will retry after 590.453192ms: waiting for machine to come up
	I0920 21:15:01.866507   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:01.866905   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:01.866927   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:01.866860   29569 retry.go:31] will retry after 645.79905ms: waiting for machine to come up
	I0920 21:15:02.514921   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:02.515518   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:02.515553   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:02.515465   29569 retry.go:31] will retry after 946.798883ms: waiting for machine to come up
	I0920 21:15:03.464250   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:03.464679   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:03.464704   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:03.464640   29569 retry.go:31] will retry after 983.162198ms: waiting for machine to come up
	I0920 21:15:04.450382   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:04.450835   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:04.450858   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:04.450809   29569 retry.go:31] will retry after 1.725909532s: waiting for machine to come up
	I0920 21:15:06.178808   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:06.179200   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:06.179226   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:06.179150   29569 retry.go:31] will retry after 2.07990615s: waiting for machine to come up
	I0920 21:15:08.260549   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:08.261003   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:08.261030   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:08.260968   29569 retry.go:31] will retry after 2.83157909s: waiting for machine to come up
	I0920 21:15:11.093814   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:11.094191   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:11.094225   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:11.094158   29569 retry.go:31] will retry after 3.472040975s: waiting for machine to come up
	I0920 21:15:14.569242   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:14.569658   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:14.569683   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:14.569603   29569 retry.go:31] will retry after 2.791417275s: waiting for machine to come up
	I0920 21:15:17.364625   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:17.365099   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find current IP address of domain ha-797954 in network mk-ha-797954
	I0920 21:15:17.365116   29546 main.go:141] libmachine: (ha-797954) DBG | I0920 21:15:17.365066   29569 retry.go:31] will retry after 5.415425974s: waiting for machine to come up
	I0920 21:15:22.785521   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786004   29546 main.go:141] libmachine: (ha-797954) Found IP for machine: 192.168.39.240
	I0920 21:15:22.786028   29546 main.go:141] libmachine: (ha-797954) Reserving static IP address...
	I0920 21:15:22.786041   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has current primary IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.786330   29546 main.go:141] libmachine: (ha-797954) DBG | unable to find host DHCP lease matching {name: "ha-797954", mac: "52:54:00:62:bb:49", ip: "192.168.39.240"} in network mk-ha-797954
	I0920 21:15:22.857285   29546 main.go:141] libmachine: (ha-797954) Reserved static IP address: 192.168.39.240
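
The lines above show the driver polling libvirt for the domain's DHCP lease, sleeping for a progressively longer, jittered interval between attempts (223ms, 354ms, ... up to a few seconds) until the lease appears and the IP can be reserved. A minimal Go sketch of that retry-with-backoff pattern follows; lookupLeaseIP is a hypothetical stand-in for the libvirt lease query, and the delays only approximate what the retry helper in the log does.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
	// matching the domain's MAC address (hypothetical helper).
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease yet")
	}

	// waitForIP retries lookupLeaseIP with a growing, jittered delay, in the same
	// spirit as the "will retry after ..." lines above.
	func waitForIP(mac string, deadline time.Duration) (string, error) {
		start := time.Now()
		base := 200 * time.Millisecond
		for attempt := 0; time.Since(start) < deadline; attempt++ {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			shift := attempt
			if shift > 5 {
				shift = 5 // cap the exponential growth at a few seconds
			}
			sleep := base << uint(shift)
			sleep += time.Duration(rand.Int63n(int64(sleep) / 2)) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
		}
		return "", fmt.Errorf("timed out waiting for an IP for %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:62:bb:49", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}
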
	I0920 21:15:22.857326   29546 main.go:141] libmachine: (ha-797954) Waiting for SSH to be available...
	I0920 21:15:22.857348   29546 main.go:141] libmachine: (ha-797954) DBG | Getting to WaitForSSH function...
	I0920 21:15:22.859900   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860220   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.860240   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.860390   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH client type: external
	I0920 21:15:22.860414   29546 main.go:141] libmachine: (ha-797954) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa (-rw-------)
	I0920 21:15:22.860439   29546 main.go:141] libmachine: (ha-797954) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.240 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:15:22.860456   29546 main.go:141] libmachine: (ha-797954) DBG | About to run SSH command:
	I0920 21:15:22.860469   29546 main.go:141] libmachine: (ha-797954) DBG | exit 0
	I0920 21:15:22.987393   29546 main.go:141] libmachine: (ha-797954) DBG | SSH cmd err, output: <nil>: 
	I0920 21:15:22.987618   29546 main.go:141] libmachine: (ha-797954) KVM machine creation complete!
	I0920 21:15:22.987926   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:22.988498   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988678   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:22.988799   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:15:22.988815   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:22.990056   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:15:22.990073   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:15:22.990078   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:15:22.990084   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:22.992361   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992791   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:22.992813   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:22.992947   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:22.993123   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993226   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:22.993339   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:22.993444   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:22.993635   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:22.993647   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:15:23.098405   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
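
Once the lease shows up, readiness is confirmed by running "exit 0" over SSH, first with the external ssh binary and then with the native client whose result appears just above. The sketch below does the same probe with golang.org/x/crypto/ssh; the host, user and key path are taken from the log, but this is illustrative and not minikube's own sshutil code.

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// probeSSH dials host:22 with the given private key and runs "exit 0",
	// mirroring the readiness check in the log above (sketch only).
	func probeSSH(host, user, keyPath string) error {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		client, err := ssh.Dial("tcp", host+":22", cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0") // succeeds once sshd accepts key auth and commands
	}

	func main() {
		err := probeSSH("192.168.39.240", "docker",
			"/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa")
		fmt.Println("ssh ready:", err == nil)
	}
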
	I0920 21:15:23.098428   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:15:23.098437   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.101054   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101432   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.101460   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.101637   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.101840   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.101988   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.102095   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.102248   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.102456   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.102471   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:15:23.207907   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:15:23.208033   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:15:23.208051   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:15:23.208064   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208298   29546 buildroot.go:166] provisioning hostname "ha-797954"
	I0920 21:15:23.208319   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.208502   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.211194   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211511   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.211538   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.211661   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.211829   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212009   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.212107   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.212272   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.212442   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.212456   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954 && echo "ha-797954" | sudo tee /etc/hostname
	I0920 21:15:23.333898   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:15:23.333955   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.336557   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337630   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.337652   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.337927   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.338114   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338300   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.338451   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.338601   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.338766   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.338781   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:15:23.457012   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:15:23.457045   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:15:23.457068   29546 buildroot.go:174] setting up certificates
	I0920 21:15:23.457076   29546 provision.go:84] configureAuth start
	I0920 21:15:23.457084   29546 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:15:23.457358   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:23.460274   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460523   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.460544   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.460743   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.463346   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463695   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.463721   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.463823   29546 provision.go:143] copyHostCerts
	I0920 21:15:23.463873   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.463907   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:15:23.463922   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:15:23.464005   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:15:23.464111   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464141   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:15:23.464151   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:15:23.464195   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:15:23.464272   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464310   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:15:23.464319   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:15:23.464355   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:15:23.464469   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954 san=[127.0.0.1 192.168.39.240 ha-797954 localhost minikube]
	I0920 21:15:23.729781   29546 provision.go:177] copyRemoteCerts
	I0920 21:15:23.729854   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:15:23.729878   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.732636   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733071   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.733098   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.733347   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.733569   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.733717   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.733849   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:23.819565   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:15:23.819631   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:15:23.849592   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:15:23.849659   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 21:15:23.875769   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:15:23.875849   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:15:23.901303   29546 provision.go:87] duration metric: took 444.215684ms to configureAuth
	I0920 21:15:23.901335   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:15:23.901537   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:23.901620   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:23.904924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905466   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:23.905493   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:23.905749   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:23.905969   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906138   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:23.906294   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:23.906464   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:23.906655   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:23.906678   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:15:24.134358   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:15:24.134386   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:15:24.134394   29546 main.go:141] libmachine: (ha-797954) Calling .GetURL
	I0920 21:15:24.135777   29546 main.go:141] libmachine: (ha-797954) DBG | Using libvirt version 6000000
	I0920 21:15:24.138158   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138556   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.138587   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.138722   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:15:24.138737   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:15:24.138745   29546 client.go:171] duration metric: took 25.95404705s to LocalClient.Create
	I0920 21:15:24.138772   29546 start.go:167] duration metric: took 25.954119175s to libmachine.API.Create "ha-797954"
	I0920 21:15:24.138783   29546 start.go:293] postStartSetup for "ha-797954" (driver="kvm2")
	I0920 21:15:24.138798   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:15:24.138818   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.139051   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:15:24.139076   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.140985   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141346   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.141374   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.141499   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.141674   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.141828   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.142019   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.225959   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:15:24.230124   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:15:24.230144   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:15:24.230206   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:15:24.230307   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:15:24.230322   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:15:24.230441   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:15:24.240575   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:24.264696   29546 start.go:296] duration metric: took 125.900056ms for postStartSetup
	I0920 21:15:24.264741   29546 main.go:141] libmachine: (ha-797954) Calling .GetConfigRaw
	I0920 21:15:24.265475   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.267916   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268228   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.268259   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.268458   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:24.268638   29546 start.go:128] duration metric: took 26.102077231s to createHost
	I0920 21:15:24.268658   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.270797   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271111   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.271130   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.271302   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.271452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271594   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.271702   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.271838   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:15:24.272013   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:15:24.272027   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:15:24.379993   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866924.358104924
	
	I0920 21:15:24.380018   29546 fix.go:216] guest clock: 1726866924.358104924
	I0920 21:15:24.380025   29546 fix.go:229] Guest: 2024-09-20 21:15:24.358104924 +0000 UTC Remote: 2024-09-20 21:15:24.268649465 +0000 UTC m=+26.209850110 (delta=89.455459ms)
	I0920 21:15:24.380055   29546 fix.go:200] guest clock delta is within tolerance: 89.455459ms
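
The clock check above runs `date +%s.%N` on the guest, parses the result, and compares it with the host's wall clock; here the 89.455459ms delta is accepted. A tiny Go sketch of that comparison follows; the one-second tolerance used here is an assumption, since the actual threshold is not shown in the log.

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports the absolute host/guest clock delta and whether it
	// falls inside the allowed drift; the log above accepted an 89.455459ms delta.
	func withinTolerance(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(89455459 * time.Nanosecond)           // pretend guest `date +%s.%N` reading
		delta, ok := withinTolerance(host, guest, time.Second) // 1s tolerance is an assumption
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, ok)
	}
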
	I0920 21:15:24.380059   29546 start.go:83] releasing machines lock for "ha-797954", held for 26.213569983s
	I0920 21:15:24.380077   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.380328   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:24.382878   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383258   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.383280   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.383441   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.383944   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384137   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:24.384211   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:15:24.384265   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.384357   29546 ssh_runner.go:195] Run: cat /version.json
	I0920 21:15:24.384382   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:24.386758   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.386983   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387156   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387178   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387338   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387455   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:24.387479   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:24.387524   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387629   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:24.387698   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.387766   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:24.387832   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.387879   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:24.388006   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:24.464447   29546 ssh_runner.go:195] Run: systemctl --version
	I0920 21:15:24.489896   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:15:24.649570   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:15:24.656389   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:15:24.656455   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:15:24.672721   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:15:24.672744   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:15:24.672817   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:15:24.689886   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:15:24.704352   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:15:24.704400   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:15:24.718653   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:15:24.733865   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:15:24.855165   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:15:25.006541   29546 docker.go:233] disabling docker service ...
	I0920 21:15:25.006639   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:15:25.021286   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:15:25.034719   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:15:25.179842   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:15:25.303409   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:15:25.317836   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:15:25.337433   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:15:25.337507   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.348743   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:15:25.348803   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.359907   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.370733   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.381300   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:15:25.392436   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.403057   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.420657   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:15:25.430817   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:15:25.440277   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:15:25.440345   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:15:25.453579   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:15:25.463440   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:25.581550   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
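
The block above pins the pause image, switches cri-o to the cgroupfs cgroup manager, moves conmon into the pod cgroup and opens unprivileged ports by editing /etc/crio/crio.conf.d/02-crio.conf with sed, then reloads systemd and restarts crio. Below is a rough Go equivalent of the first two edits, purely to illustrate the rewrite; it is not how minikube itself applies them, and it skips the conmon_cgroup and default_sysctls steps.

	package main

	import (
		"os"
		"regexp"
	)

	// applyCrioOverrides rewrites the cri-o drop-in in the same spirit as the sed
	// commands in the log: pin the pause image and set the cgroup manager.
	func applyCrioOverrides(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = applyCrioOverrides("/etc/crio/crio.conf.d/02-crio.conf")
	}
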
	I0920 21:15:25.681183   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:15:25.681259   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:15:25.686062   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:15:25.686107   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:15:25.690069   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:15:25.733819   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:15:25.733901   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.764147   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:15:25.795214   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:15:25.796911   29546 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:15:25.799674   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800015   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:25.800052   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:25.800321   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:15:25.804808   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
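
The /etc/hosts update above is idempotent: it drops any existing line ending in a tab plus the hostname, appends a fresh "ip<TAB>name" entry, and copies the temp file back into place. A hypothetical Go helper doing the same thing:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mimics the shell pipeline above: remove any stale mapping
	// for name, append "ip\tname", and write the file back (sketch only).
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+name) {
				continue // drop the old mapping
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal")
	}
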
	I0920 21:15:25.818394   29546 kubeadm.go:883] updating cluster {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:15:25.818488   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:25.818527   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:25.850610   29546 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 21:15:25.850709   29546 ssh_runner.go:195] Run: which lz4
	I0920 21:15:25.854671   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0920 21:15:25.854769   29546 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 21:15:25.859047   29546 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 21:15:25.859084   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 21:15:27.274564   29546 crio.go:462] duration metric: took 1.41982381s to copy over tarball
	I0920 21:15:27.274662   29546 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 21:15:29.326906   29546 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.052207909s)
	I0920 21:15:29.326937   29546 crio.go:469] duration metric: took 2.052348508s to extract the tarball
	I0920 21:15:29.326946   29546 ssh_runner.go:146] rm: /preloaded.tar.lz4
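
The preload step above copies a ~388MB .tar.lz4 of container images to the guest, streams it through lz4 into tar under /var (preserving xattrs), and then removes the tarball. A small sketch that reads the same kind of archive with github.com/pierrec/lz4/v4 and archive/tar, listing entries rather than extracting them:

	package main

	import (
		"archive/tar"
		"fmt"
		"io"
		"os"

		"github.com/pierrec/lz4/v4"
	)

	// listPreload streams a .tar.lz4 preload (like /preloaded.tar.lz4 above) and
	// prints its entries; the real flow extracts into /var with xattrs preserved,
	// which this sketch does not attempt.
	func listPreload(path string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				return nil
			}
			if err != nil {
				return err
			}
			fmt.Printf("%9d  %s\n", hdr.Size, hdr.Name)
		}
	}

	func main() {
		if err := listPreload("preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
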
	I0920 21:15:29.364795   29546 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:15:29.420778   29546 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:15:29.420807   29546 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:15:29.420830   29546 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.1 crio true true} ...
	I0920 21:15:29.420939   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:15:29.421019   29546 ssh_runner.go:195] Run: crio config
	I0920 21:15:29.467359   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:29.467388   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:29.467398   29546 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:15:29.467423   29546 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-797954 NodeName:ha-797954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:15:29.467586   29546 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-797954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 21:15:29.467615   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:15:29.467665   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:15:29.486323   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:15:29.486445   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0920 21:15:29.486507   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:15:29.497349   29546 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:15:29.497421   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 21:15:29.507626   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 21:15:29.524821   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:15:29.542162   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 21:15:29.559146   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0920 21:15:29.575893   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:15:29.579898   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:15:29.592448   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:15:29.726203   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:15:29.743307   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.240
	I0920 21:15:29.743326   29546 certs.go:194] generating shared ca certs ...
	I0920 21:15:29.743341   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.743501   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:15:29.743558   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:15:29.743572   29546 certs.go:256] generating profile certs ...
	I0920 21:15:29.743620   29546 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:15:29.743648   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt with IP's: []
	I0920 21:15:29.822848   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt ...
	I0920 21:15:29.822877   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt: {Name:mk9531e4d2ca25e31047a503b6559591e4241e84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823097   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key ...
	I0920 21:15:29.823113   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key: {Name:mk780710aa75c203f633d5fd9de901af5921b430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.823230   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a
	I0920 21:15:29.823254   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.254]
	I0920 21:15:29.947576   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a ...
	I0920 21:15:29.947608   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a: {Name:mk184aed753f9fcdadd87be1874f607263a47d89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947790   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a ...
	I0920 21:15:29.947809   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a: {Name:mke29d551b425a7cf297f097b7c968e0ec8e9532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:29.947904   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:15:29.948003   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.e66f4b1a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:15:29.948077   29546 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:15:29.948097   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt with IP's: []
	I0920 21:15:30.056624   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt ...
	I0920 21:15:30.056653   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt: {Name:mk2dc3b0b0385ac4b3979e15bd90a0ff1e5ef2a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:30.056825   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key ...
	I0920 21:15:30.056842   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key: {Name:mkbe53da9f2669d754e44e4038f6b341d45e8e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
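The crypto.go lines above generate the profile certificates: a client cert, an apiserver serving cert signed for the node IP and the HA VIP, and a proxy-client cert, all signed by the shared minikube CA. Below is a self-contained sketch of issuing a serving certificate with the same IP SANs using crypto/x509; it creates a throwaway CA in memory instead of loading ca.crt/ca.key from .minikube, and error handling is abbreviated.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA key/cert; minikube would instead load the existing CA from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Serving cert with the same IP SANs the log shows for the apiserver profile cert.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
                net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.254"),
            },
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }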
	I0920 21:15:30.056936   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:15:30.056960   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:15:30.056983   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:15:30.057001   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:15:30.057014   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:15:30.057028   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:15:30.057044   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:15:30.057067   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:15:30.057124   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:15:30.057165   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:15:30.057179   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:15:30.057219   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:15:30.057249   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:15:30.057279   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:15:30.057335   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:15:30.057375   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.057397   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.057417   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.058000   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:15:30.084548   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:15:30.110347   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:15:30.134953   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:15:30.160708   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 21:15:30.184711   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 21:15:30.208901   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:15:30.233498   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:15:30.258374   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:15:30.283305   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:15:30.307934   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:15:30.331963   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:15:30.348398   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:15:30.353975   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:15:30.365091   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369663   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.369715   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:15:30.375452   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:15:30.386174   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:15:30.397261   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401699   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.401757   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:15:30.407395   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:15:30.418060   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:15:30.429189   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434030   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.434098   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:15:30.440220   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
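Each CA file above is handled the same way: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL's hash-based lookup finds it. A sketch of that pattern follows, assuming openssl is on PATH and the process is allowed to write to /etc/ssl/certs.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash mirrors the log's pattern: compute the OpenSSL subject hash of
    // a PEM certificate and create /etc/ssl/certs/<hash>.0 pointing at it.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Replace any stale link, as the `ln -fs` in the log does.
        _ = os.Remove(link)
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }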
	I0920 21:15:30.451523   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:15:30.455873   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:15:30.455921   29546 kubeadm.go:392] StartCluster: {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:15:30.455994   29546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:15:30.456043   29546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:15:30.494216   29546 cri.go:89] found id: ""
	I0920 21:15:30.494275   29546 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 21:15:30.504666   29546 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 21:15:30.517367   29546 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 21:15:30.527919   29546 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 21:15:30.527949   29546 kubeadm.go:157] found existing configuration files:
	
	I0920 21:15:30.528010   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 21:15:30.538805   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 21:15:30.538875   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 21:15:30.550872   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 21:15:30.562292   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 21:15:30.562344   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 21:15:30.573641   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.586772   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 21:15:30.586843   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 21:15:30.596548   29546 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 21:15:30.605570   29546 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 21:15:30.605630   29546 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 21:15:30.615094   29546 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 21:15:30.716781   29546 kubeadm.go:310] W0920 21:15:30.700841     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.717889   29546 kubeadm.go:310] W0920 21:15:30.702167     839 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 21:15:30.826716   29546 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 21:15:41.799746   29546 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 21:15:41.799822   29546 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 21:15:41.799918   29546 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 21:15:41.800049   29546 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 21:15:41.800185   29546 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 21:15:41.800260   29546 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 21:15:41.801703   29546 out.go:235]   - Generating certificates and keys ...
	I0920 21:15:41.801811   29546 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 21:15:41.801875   29546 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 21:15:41.801961   29546 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 21:15:41.802058   29546 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 21:15:41.802140   29546 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 21:15:41.802227   29546 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 21:15:41.802292   29546 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 21:15:41.802387   29546 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802435   29546 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 21:15:41.802542   29546 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-797954 localhost] and IPs [192.168.39.240 127.0.0.1 ::1]
	I0920 21:15:41.802632   29546 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 21:15:41.802717   29546 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 21:15:41.802782   29546 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 21:15:41.802850   29546 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 21:15:41.802936   29546 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 21:15:41.803041   29546 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 21:15:41.803118   29546 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 21:15:41.803204   29546 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 21:15:41.803284   29546 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 21:15:41.803389   29546 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 21:15:41.803478   29546 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 21:15:41.804945   29546 out.go:235]   - Booting up control plane ...
	I0920 21:15:41.805074   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 21:15:41.805179   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 21:15:41.805284   29546 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 21:15:41.805403   29546 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 21:15:41.805511   29546 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 21:15:41.805574   29546 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 21:15:41.805717   29546 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 21:15:41.805855   29546 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 21:15:41.805933   29546 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.138308ms
	I0920 21:15:41.806037   29546 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 21:15:41.806112   29546 kubeadm.go:310] [api-check] The API server is healthy after 6.067641334s
	I0920 21:15:41.806250   29546 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 21:15:41.806400   29546 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 21:15:41.806458   29546 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 21:15:41.806622   29546 kubeadm.go:310] [mark-control-plane] Marking the node ha-797954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 21:15:41.806712   29546 kubeadm.go:310] [bootstrap-token] Using token: 5kg5kj.1p3ye5r3aa4bs2us
	I0920 21:15:41.808141   29546 out.go:235]   - Configuring RBAC rules ...
	I0920 21:15:41.808280   29546 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 21:15:41.808399   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 21:15:41.808574   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 21:15:41.808752   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 21:15:41.808897   29546 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 21:15:41.809018   29546 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 21:15:41.809193   29546 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 21:15:41.809271   29546 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 21:15:41.809343   29546 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 21:15:41.809355   29546 kubeadm.go:310] 
	I0920 21:15:41.809445   29546 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 21:15:41.809464   29546 kubeadm.go:310] 
	I0920 21:15:41.809580   29546 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 21:15:41.809592   29546 kubeadm.go:310] 
	I0920 21:15:41.809629   29546 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 21:15:41.809688   29546 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 21:15:41.809745   29546 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 21:15:41.809754   29546 kubeadm.go:310] 
	I0920 21:15:41.809832   29546 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 21:15:41.809843   29546 kubeadm.go:310] 
	I0920 21:15:41.809916   29546 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 21:15:41.809926   29546 kubeadm.go:310] 
	I0920 21:15:41.810006   29546 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 21:15:41.810120   29546 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 21:15:41.810222   29546 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 21:15:41.810233   29546 kubeadm.go:310] 
	I0920 21:15:41.810391   29546 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 21:15:41.810518   29546 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 21:15:41.810529   29546 kubeadm.go:310] 
	I0920 21:15:41.810655   29546 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.810800   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 21:15:41.810821   29546 kubeadm.go:310] 	--control-plane 
	I0920 21:15:41.810824   29546 kubeadm.go:310] 
	I0920 21:15:41.810951   29546 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 21:15:41.810988   29546 kubeadm.go:310] 
	I0920 21:15:41.811099   29546 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5kg5kj.1p3ye5r3aa4bs2us \
	I0920 21:15:41.811201   29546 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 21:15:41.811212   29546 cni.go:84] Creating CNI manager for ""
	I0920 21:15:41.811218   29546 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0920 21:15:41.812792   29546 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 21:15:41.814273   29546 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 21:15:41.819866   29546 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 21:15:41.819883   29546 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 21:15:41.845113   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 21:15:42.242872   29546 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 21:15:42.242979   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.243015   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954 minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=true
	I0920 21:15:42.278969   29546 ops.go:34] apiserver oom_adj: -16
	I0920 21:15:42.447710   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:42.948122   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.448766   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:43.948784   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.448608   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:44.948709   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.448374   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:45.948002   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 21:15:46.073838   29546 kubeadm.go:1113] duration metric: took 3.830954868s to wait for elevateKubeSystemPrivileges
	I0920 21:15:46.073874   29546 kubeadm.go:394] duration metric: took 15.617956047s to StartCluster
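The repeated `kubectl get sa default` runs above are a readiness poll: after kubeadm finishes, minikube waits for the default ServiceAccount to exist before continuing. A sketch of such a poll via os/exec follows; the binary and kubeconfig paths are copied from the log, while the 500ms interval and the timeout are assumptions.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls `kubectl get sa default` until it succeeds or the timeout
    // expires, mirroring the repeated Run lines in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }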
	I0920 21:15:46.073900   29546 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.073984   29546 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.074611   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:15:46.074804   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 21:15:46.074835   29546 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:46.074859   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:15:46.074866   29546 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 21:15:46.074942   29546 addons.go:69] Setting storage-provisioner=true in profile "ha-797954"
	I0920 21:15:46.074955   29546 addons.go:234] Setting addon storage-provisioner=true in "ha-797954"
	I0920 21:15:46.074983   29546 addons.go:69] Setting default-storageclass=true in profile "ha-797954"
	I0920 21:15:46.075001   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.075002   29546 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-797954"
	I0920 21:15:46.075035   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:46.075376   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075406   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.075455   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.075491   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.090209   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39709
	I0920 21:15:46.090325   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45781
	I0920 21:15:46.090697   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.090729   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.091211   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091226   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091404   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.091444   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.091507   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091772   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.091914   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.092087   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.092132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.094002   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:15:46.094264   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 21:15:46.094718   29546 cert_rotation.go:140] Starting client certificate rotation controller
	I0920 21:15:46.094872   29546 addons.go:234] Setting addon default-storageclass=true in "ha-797954"
	I0920 21:15:46.094901   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:15:46.095181   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.095211   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.107218   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36333
	I0920 21:15:46.107712   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.108266   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.108304   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.108613   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.108761   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.109409   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I0920 21:15:46.109837   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.110315   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.110337   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.110525   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.110662   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.111094   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:46.111132   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:46.112949   29546 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 21:15:46.114785   29546 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.114809   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 21:15:46.114835   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.118672   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119149   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.119203   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.119483   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.119680   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.119847   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.119996   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
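sshutil.go above opens an SSH connection to the node (192.168.39.240:22, user docker, the machine's id_rsa) so the addon manifests can be copied over. A sketch of establishing such a client with golang.org/x/crypto/ssh follows; host-key checking is disabled purely for brevity on a throwaway test VM, and the command it runs is just an example.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key for the machine; path copied from the log above.
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a test VM, not for production
        }
        client, err := ssh.Dial("tcp", "192.168.39.240:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        out, err := session.Output("ls /etc/kubernetes/addons")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }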
	I0920 21:15:46.128013   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37989
	I0920 21:15:46.128448   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:46.128883   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:46.128912   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:46.129281   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:46.129455   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:15:46.130827   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:15:46.131041   29546 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.131057   29546 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 21:15:46.131075   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:15:46.133356   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133727   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:15:46.133763   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:15:46.133887   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:15:46.134032   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:15:46.134204   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:15:46.134338   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:15:46.285450   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 21:15:46.287399   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 21:15:46.336627   29546 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 21:15:46.881947   29546 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
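The long Run line above rewrites the coredns ConfigMap with sed, inserting a hosts block (192.168.39.1 host.minikube.internal, with fallthrough) before the `forward . /etc/resolv.conf` directive so pods can resolve the hypervisor host. Below is a sketch of the same insertion done on a Corefile string in Go rather than with sed; the sample Corefile is a reduced example, not the cluster's actual ConfigMap.

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a hosts{} block before the first "forward ." line of a
    // Corefile, which is what the sed pipeline in the log accomplishes inside the
    // coredns ConfigMap.
    func injectHostRecord(corefile, hostIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
        lines := strings.Split(corefile, "\n")
        var out []string
        inserted := false
        for _, line := range lines {
            if !inserted && strings.HasPrefix(strings.TrimSpace(line), "forward .") {
                out = append(out, hostsBlock)
                inserted = true
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
        fmt.Println(injectHostRecord(corefile, "192.168.39.1"))
    }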
	I0920 21:15:47.160811   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160830   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.160889   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.160910   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161142   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161165   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161176   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161184   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161272   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161305   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161318   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161334   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.161364   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.161474   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.161485   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.161517   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.161590   29546 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 21:15:47.161611   29546 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 21:15:47.161708   29546 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0920 21:15:47.161717   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.161727   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.161733   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.162842   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.164042   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.164114   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.176849   29546 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0920 21:15:47.177419   29546 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0920 21:15:47.177432   29546 round_trippers.go:469] Request Headers:
	I0920 21:15:47.177439   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:15:47.177443   29546 round_trippers.go:473]     Content-Type: application/json
	I0920 21:15:47.177445   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:15:47.180165   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:15:47.180318   29546 main.go:141] libmachine: Making call to close driver server
	I0920 21:15:47.180329   29546 main.go:141] libmachine: (ha-797954) Calling .Close
	I0920 21:15:47.180614   29546 main.go:141] libmachine: Successfully made call to close driver server
	I0920 21:15:47.180655   29546 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 21:15:47.180655   29546 main.go:141] libmachine: (ha-797954) DBG | Closing plugin on server side
	I0920 21:15:47.182546   29546 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0920 21:15:47.183737   29546 addons.go:510] duration metric: took 1.108869145s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0920 21:15:47.183771   29546 start.go:246] waiting for cluster config update ...
	I0920 21:15:47.183786   29546 start.go:255] writing updated cluster config ...
	I0920 21:15:47.185521   29546 out.go:201] 
	I0920 21:15:47.187156   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:15:47.187233   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.188857   29546 out.go:177] * Starting "ha-797954-m02" control-plane node in "ha-797954" cluster
	I0920 21:15:47.190069   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:15:47.190087   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:15:47.190183   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:15:47.190195   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:15:47.190259   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:15:47.190407   29546 start.go:360] acquireMachinesLock for ha-797954-m02: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:15:47.190448   29546 start.go:364] duration metric: took 22.919µs to acquireMachinesLock for "ha-797954-m02"
	I0920 21:15:47.190469   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:15:47.190532   29546 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0920 21:15:47.192163   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:15:47.192261   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:15:47.192300   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:15:47.207012   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37487
	I0920 21:15:47.207450   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:15:47.207959   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:15:47.207987   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:15:47.208333   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:15:47.208512   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:15:47.208644   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:15:47.208810   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:15:47.208842   29546 client.go:168] LocalClient.Create starting
	I0920 21:15:47.208877   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:15:47.208918   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.208938   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209003   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:15:47.209029   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:15:47.209043   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:15:47.209069   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:15:47.209080   29546 main.go:141] libmachine: (ha-797954-m02) Calling .PreCreateCheck
	I0920 21:15:47.209235   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:15:47.209601   29546 main.go:141] libmachine: Creating machine...
	I0920 21:15:47.209615   29546 main.go:141] libmachine: (ha-797954-m02) Calling .Create
	I0920 21:15:47.209743   29546 main.go:141] libmachine: (ha-797954-m02) Creating KVM machine...
	I0920 21:15:47.211023   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing default KVM network
	I0920 21:15:47.211192   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found existing private KVM network mk-ha-797954
	I0920 21:15:47.211306   29546 main.go:141] libmachine: (ha-797954-m02) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.211329   29546 main.go:141] libmachine: (ha-797954-m02) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:15:47.211404   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.211303   29933 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.211513   29546 main.go:141] libmachine: (ha-797954-m02) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:15:47.448349   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.448214   29933 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa...
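Before defining the m02 libvirt domain, the worker goroutine above creates an SSH keypair for the new machine. A sketch of generating an RSA key and its authorized_keys form with golang.org/x/crypto/ssh follows; the output file names are placeholders rather than the .minikube machines directory.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // 2048-bit RSA key written as a PEM private key plus an authorized_keys line,
        // roughly what an id_rsa / id_rsa.pub pair for the new VM contains.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            log.Fatal(err)
        }
    }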
	I0920 21:15:47.615677   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615554   29933 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk...
	I0920 21:15:47.615707   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing magic tar header
	I0920 21:15:47.615718   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Writing SSH key tar header
	I0920 21:15:47.615726   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:47.615660   29933 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 ...
	I0920 21:15:47.615794   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02
	I0920 21:15:47.615812   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02 (perms=drwx------)
	I0920 21:15:47.615820   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:15:47.615834   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:15:47.615839   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:15:47.615846   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:15:47.615854   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:15:47.615871   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:15:47.615884   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Checking permissions on dir: /home
	I0920 21:15:47.615890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Skipping /home - not owner
	I0920 21:15:47.615899   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:15:47.615913   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:15:47.615923   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:15:47.615931   29546 main.go:141] libmachine: (ha-797954-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:15:47.615945   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:47.617011   29546 main.go:141] libmachine: (ha-797954-m02) define libvirt domain using xml: 
	I0920 21:15:47.617032   29546 main.go:141] libmachine: (ha-797954-m02) <domain type='kvm'>
	I0920 21:15:47.617040   29546 main.go:141] libmachine: (ha-797954-m02)   <name>ha-797954-m02</name>
	I0920 21:15:47.617044   29546 main.go:141] libmachine: (ha-797954-m02)   <memory unit='MiB'>2200</memory>
	I0920 21:15:47.617050   29546 main.go:141] libmachine: (ha-797954-m02)   <vcpu>2</vcpu>
	I0920 21:15:47.617057   29546 main.go:141] libmachine: (ha-797954-m02)   <features>
	I0920 21:15:47.617064   29546 main.go:141] libmachine: (ha-797954-m02)     <acpi/>
	I0920 21:15:47.617071   29546 main.go:141] libmachine: (ha-797954-m02)     <apic/>
	I0920 21:15:47.617079   29546 main.go:141] libmachine: (ha-797954-m02)     <pae/>
	I0920 21:15:47.617093   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617100   29546 main.go:141] libmachine: (ha-797954-m02)   </features>
	I0920 21:15:47.617108   29546 main.go:141] libmachine: (ha-797954-m02)   <cpu mode='host-passthrough'>
	I0920 21:15:47.617113   29546 main.go:141] libmachine: (ha-797954-m02)   
	I0920 21:15:47.617117   29546 main.go:141] libmachine: (ha-797954-m02)   </cpu>
	I0920 21:15:47.617121   29546 main.go:141] libmachine: (ha-797954-m02)   <os>
	I0920 21:15:47.617128   29546 main.go:141] libmachine: (ha-797954-m02)     <type>hvm</type>
	I0920 21:15:47.617133   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='cdrom'/>
	I0920 21:15:47.617140   29546 main.go:141] libmachine: (ha-797954-m02)     <boot dev='hd'/>
	I0920 21:15:47.617147   29546 main.go:141] libmachine: (ha-797954-m02)     <bootmenu enable='no'/>
	I0920 21:15:47.617156   29546 main.go:141] libmachine: (ha-797954-m02)   </os>
	I0920 21:15:47.617173   29546 main.go:141] libmachine: (ha-797954-m02)   <devices>
	I0920 21:15:47.617187   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='cdrom'>
	I0920 21:15:47.617195   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/boot2docker.iso'/>
	I0920 21:15:47.617200   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hdc' bus='scsi'/>
	I0920 21:15:47.617205   29546 main.go:141] libmachine: (ha-797954-m02)       <readonly/>
	I0920 21:15:47.617212   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617217   29546 main.go:141] libmachine: (ha-797954-m02)     <disk type='file' device='disk'>
	I0920 21:15:47.617226   29546 main.go:141] libmachine: (ha-797954-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:15:47.617233   29546 main.go:141] libmachine: (ha-797954-m02)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/ha-797954-m02.rawdisk'/>
	I0920 21:15:47.617247   29546 main.go:141] libmachine: (ha-797954-m02)       <target dev='hda' bus='virtio'/>
	I0920 21:15:47.617263   29546 main.go:141] libmachine: (ha-797954-m02)     </disk>
	I0920 21:15:47.617275   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617287   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='mk-ha-797954'/>
	I0920 21:15:47.617297   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617304   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617314   29546 main.go:141] libmachine: (ha-797954-m02)     <interface type='network'>
	I0920 21:15:47.617325   29546 main.go:141] libmachine: (ha-797954-m02)       <source network='default'/>
	I0920 21:15:47.617331   29546 main.go:141] libmachine: (ha-797954-m02)       <model type='virtio'/>
	I0920 21:15:47.617336   29546 main.go:141] libmachine: (ha-797954-m02)     </interface>
	I0920 21:15:47.617350   29546 main.go:141] libmachine: (ha-797954-m02)     <serial type='pty'>
	I0920 21:15:47.617362   29546 main.go:141] libmachine: (ha-797954-m02)       <target port='0'/>
	I0920 21:15:47.617368   29546 main.go:141] libmachine: (ha-797954-m02)     </serial>
	I0920 21:15:47.617379   29546 main.go:141] libmachine: (ha-797954-m02)     <console type='pty'>
	I0920 21:15:47.617391   29546 main.go:141] libmachine: (ha-797954-m02)       <target type='serial' port='0'/>
	I0920 21:15:47.617402   29546 main.go:141] libmachine: (ha-797954-m02)     </console>
	I0920 21:15:47.617411   29546 main.go:141] libmachine: (ha-797954-m02)     <rng model='virtio'>
	I0920 21:15:47.617441   29546 main.go:141] libmachine: (ha-797954-m02)       <backend model='random'>/dev/random</backend>
	I0920 21:15:47.617464   29546 main.go:141] libmachine: (ha-797954-m02)     </rng>
	I0920 21:15:47.617472   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617481   29546 main.go:141] libmachine: (ha-797954-m02)     
	I0920 21:15:47.617489   29546 main.go:141] libmachine: (ha-797954-m02)   </devices>
	I0920 21:15:47.617497   29546 main.go:141] libmachine: (ha-797954-m02) </domain>
	I0920 21:15:47.617508   29546 main.go:141] libmachine: (ha-797954-m02) 
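
	The XML echoed line-by-line above is the libvirt domain definition the kvm2 driver submits before booting the new node. As a rough illustration only (not minikube's actual code), defining and starting such a domain through the libvirt Go bindings could look like the sketch below; the import path and the placeholder domainXML value are assumptions.

	    package main

	    import (
	        "log"

	        libvirt "github.com/libvirt/libvirt-go"
	    )

	    func main() {
	        // domainXML would hold the <domain type='kvm'> document printed in the log.
	        domainXML := "<domain type='kvm'>...</domain>"

	        conn, err := libvirt.NewConnect("qemu:///system")
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer conn.Close()

	        // "define libvirt domain using xml" step: register the persistent domain.
	        dom, err := conn.DomainDefineXML(domainXML)
	        if err != nil {
	            log.Fatal(err)
	        }
	        defer dom.Free()

	        // "Creating domain..." step: actually start the VM.
	        if err := dom.Create(); err != nil {
	            log.Fatal(err)
	        }
	    }
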
	I0920 21:15:47.624001   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:b4:c0:b8 in network default
	I0920 21:15:47.624558   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring networks are active...
	I0920 21:15:47.624572   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:47.625191   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network default is active
	I0920 21:15:47.625499   29546 main.go:141] libmachine: (ha-797954-m02) Ensuring network mk-ha-797954 is active
	I0920 21:15:47.625846   29546 main.go:141] libmachine: (ha-797954-m02) Getting domain xml...
	I0920 21:15:47.626575   29546 main.go:141] libmachine: (ha-797954-m02) Creating domain...
	I0920 21:15:48.860761   29546 main.go:141] libmachine: (ha-797954-m02) Waiting to get IP...
	I0920 21:15:48.861461   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:48.861850   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:48.861881   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:48.861842   29933 retry.go:31] will retry after 246.176053ms: waiting for machine to come up
	I0920 21:15:49.109266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.109702   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.109741   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.109653   29933 retry.go:31] will retry after 305.989794ms: waiting for machine to come up
	I0920 21:15:49.417320   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.417758   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.417775   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.417722   29933 retry.go:31] will retry after 399.439851ms: waiting for machine to come up
	I0920 21:15:49.818420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:49.818870   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:49.818890   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:49.818832   29933 retry.go:31] will retry after 366.677131ms: waiting for machine to come up
	I0920 21:15:50.187440   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.187949   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.187984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.187899   29933 retry.go:31] will retry after 641.811509ms: waiting for machine to come up
	I0920 21:15:50.831780   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:50.832244   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:50.832266   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:50.832186   29933 retry.go:31] will retry after 849.351875ms: waiting for machine to come up
	I0920 21:15:51.683009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:51.683497   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:51.683523   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:51.683449   29933 retry.go:31] will retry after 880.265456ms: waiting for machine to come up
	I0920 21:15:52.564861   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:52.565353   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:52.565377   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:52.565317   29933 retry.go:31] will retry after 1.175823596s: waiting for machine to come up
	I0920 21:15:53.742596   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:53.743093   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:53.743120   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:53.743040   29933 retry.go:31] will retry after 1.383827805s: waiting for machine to come up
	I0920 21:15:55.128520   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:55.129000   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:55.129028   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:55.128941   29933 retry.go:31] will retry after 2.175983574s: waiting for machine to come up
	I0920 21:15:57.306647   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:57.307130   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:57.307166   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:57.307067   29933 retry.go:31] will retry after 2.423186714s: waiting for machine to come up
	I0920 21:15:59.731527   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:15:59.731911   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:15:59.731934   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:15:59.731876   29933 retry.go:31] will retry after 3.330564372s: waiting for machine to come up
	I0920 21:16:03.063640   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:03.063984   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find current IP address of domain ha-797954-m02 in network mk-ha-797954
	I0920 21:16:03.064008   29546 main.go:141] libmachine: (ha-797954-m02) DBG | I0920 21:16:03.063942   29933 retry.go:31] will retry after 4.208745782s: waiting for machine to come up
	I0920 21:16:07.274472   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.274913   29546 main.go:141] libmachine: (ha-797954-m02) Found IP for machine: 192.168.39.63
	I0920 21:16:07.274942   29546 main.go:141] libmachine: (ha-797954-m02) Reserving static IP address...
	I0920 21:16:07.274956   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has current primary IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.275334   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "ha-797954-m02", mac: "52:54:00:53:e9:69", ip: "192.168.39.63"} in network mk-ha-797954
	I0920 21:16:07.352055   29546 main.go:141] libmachine: (ha-797954-m02) Reserved static IP address: 192.168.39.63
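
	The repeated "will retry after ..." lines above are the driver polling libvirt's DHCP leases with a growing, randomized delay until the new MAC address shows up with an IP. A minimal sketch of that wait loop, with a hypothetical lookupLeaseIP helper standing in for the real lease query:

	    package main

	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // lookupLeaseIP is a hypothetical stand-in for reading the DHCP leases of
	    // network mk-ha-797954 and returning the IP bound to the given MAC address.
	    func lookupLeaseIP(mac string) (string, error) {
	        return "", errors.New("unable to find current IP address")
	    }

	    // waitForIP retries with a randomized, growing delay, similar to the
	    // 246ms .. 4.2s waits visible in the log above.
	    func waitForIP(mac string, timeout time.Duration) (string, error) {
	        deadline := time.Now().Add(timeout)
	        wait := 250 * time.Millisecond
	        for time.Now().Before(deadline) {
	            if ip, err := lookupLeaseIP(mac); err == nil {
	                return ip, nil
	            }
	            jitter := time.Duration(rand.Int63n(int64(wait)))
	            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
	            time.Sleep(wait + jitter)
	            wait *= 2 // back off before the next lease lookup
	        }
	        return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
	    }

	    func main() {
	        if _, err := waitForIP("52:54:00:53:e9:69", 3*time.Second); err != nil {
	            fmt.Println(err)
	        }
	    }
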
	I0920 21:16:07.352089   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:07.352098   29546 main.go:141] libmachine: (ha-797954-m02) Waiting for SSH to be available...
	I0920 21:16:07.354681   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:07.355003   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954
	I0920 21:16:07.355033   29546 main.go:141] libmachine: (ha-797954-m02) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:53:e9:69
	I0920 21:16:07.355155   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:07.355181   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:07.355212   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:07.355231   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:07.355247   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:07.358863   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:16:07.358889   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:16:07.358899   29546 main.go:141] libmachine: (ha-797954-m02) DBG | command : exit 0
	I0920 21:16:07.358910   29546 main.go:141] libmachine: (ha-797954-m02) DBG | err     : exit status 255
	I0920 21:16:07.358922   29546 main.go:141] libmachine: (ha-797954-m02) DBG | output  : 
	I0920 21:16:10.359142   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Getting to WaitForSSH function...
	I0920 21:16:10.362209   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362612   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.362643   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.362812   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH client type: external
	I0920 21:16:10.362833   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa (-rw-------)
	I0920 21:16:10.362856   29546 main.go:141] libmachine: (ha-797954-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.63 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:16:10.362864   29546 main.go:141] libmachine: (ha-797954-m02) DBG | About to run SSH command:
	I0920 21:16:10.362874   29546 main.go:141] libmachine: (ha-797954-m02) DBG | exit 0
	I0920 21:16:10.491233   29546 main.go:141] libmachine: (ha-797954-m02) DBG | SSH cmd err, output: <nil>: 
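
	The first "exit 0" probe above returned status 255 because the guest did not yet have a reachable address; once the DHCP lease appears, the same external ssh invocation succeeds and provisioning continues. The readiness check amounts to running the ssh client with "exit 0" until it exits cleanly, roughly as in this sketch (options abbreviated):

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "time"
	    )

	    // sshReady reports whether `ssh ... "exit 0"` succeeds against the guest,
	    // mirroring the WaitForSSH probe in the log above.
	    func sshReady(ip, keyPath string) bool {
	        cmd := exec.Command("ssh",
	            "-o", "StrictHostKeyChecking=no",
	            "-o", "UserKnownHostsFile=/dev/null",
	            "-o", "ConnectTimeout=10",
	            "-i", keyPath,
	            "docker@"+ip,
	            "exit 0")
	        return cmd.Run() == nil // non-nil (e.g. exit status 255) while sshd is unreachable
	    }

	    func main() {
	        ip := "192.168.39.63"
	        key := "/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa"
	        for !sshReady(ip, key) {
	            fmt.Println("Getting to WaitForSSH function...")
	            time.Sleep(3 * time.Second)
	        }
	        fmt.Println("SSH is available")
	    }
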
	I0920 21:16:10.491477   29546 main.go:141] libmachine: (ha-797954-m02) KVM machine creation complete!
	I0920 21:16:10.491754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:10.492323   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492507   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:10.492661   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:16:10.492673   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetState
	I0920 21:16:10.493804   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:16:10.493816   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:16:10.493821   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:16:10.493826   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.495917   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496311   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.496341   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.496456   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.496641   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496775   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.496906   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.497045   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.497249   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.497260   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:16:10.606330   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.606353   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:16:10.606360   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.608723   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609117   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.609139   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.609298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.609503   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609652   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.609759   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.609876   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.610071   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.610085   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:16:10.724120   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:16:10.724202   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:16:10.724212   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:16:10.724219   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724500   29546 buildroot.go:166] provisioning hostname "ha-797954-m02"
	I0920 21:16:10.724528   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.724726   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.727578   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728013   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.728055   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.728155   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.728348   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728482   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.728598   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.728757   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.728927   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.728942   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m02 && echo "ha-797954-m02" | sudo tee /etc/hostname
	I0920 21:16:10.853869   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m02
	
	I0920 21:16:10.853893   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.856573   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.856928   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.856950   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.857120   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:10.857298   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857491   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:10.857632   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:10.857799   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:10.858001   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:10.858027   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:16:10.976857   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:16:10.976894   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:16:10.976921   29546 buildroot.go:174] setting up certificates
	I0920 21:16:10.976930   29546 provision.go:84] configureAuth start
	I0920 21:16:10.976939   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetMachineName
	I0920 21:16:10.977193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:10.979787   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980127   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.980156   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.980310   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:10.982562   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.982952   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:10.982987   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:10.983144   29546 provision.go:143] copyHostCerts
	I0920 21:16:10.983172   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983211   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:16:10.983220   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:16:10.983297   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:16:10.983390   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983415   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:16:10.983423   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:16:10.983457   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:16:10.983516   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983538   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:16:10.983547   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:16:10.983578   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:16:10.983645   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m02 san=[127.0.0.1 192.168.39.63 ha-797954-m02 localhost minikube]
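
	The line above mints a per-node server certificate signed by the shared minikube CA, with the listed hostnames and IPs as SANs. A compressed, illustrative sketch of that kind of SAN-bearing certificate issuance using only the Go standard library (not minikube's own provisioning code; the throwaway CA here stands in for the existing ca.pem/ca-key.pem):

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "log"
	        "math/big"
	        "net"
	        "time"
	    )

	    func main() {
	        // Stand-in CA; in minikube this is the existing ca.pem / ca-key.pem.
	        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        caTmpl := &x509.Certificate{
	            SerialNumber:          big.NewInt(1),
	            Subject:               pkix.Name{CommonName: "minikubeCA"},
	            NotBefore:             time.Now(),
	            NotAfter:              time.Now().AddDate(10, 0, 0),
	            IsCA:                  true,
	            KeyUsage:              x509.KeyUsageCertSign,
	            BasicConstraintsValid: true,
	        }
	        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        caCert, _ := x509.ParseCertificate(caDER)

	        // Server certificate with SANs like those seen in the log above.
	        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	        srvTmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(2),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-797954-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"ha-797954-m02", "localhost", "minikube"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.63")},
	        }
	        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	        if err != nil {
	            log.Fatal(err)
	        }
	        fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
	    }
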
	I0920 21:16:11.277389   29546 provision.go:177] copyRemoteCerts
	I0920 21:16:11.277450   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:16:11.277472   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.280305   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280630   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.280660   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.280787   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.280985   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.281128   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.281262   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.365413   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:16:11.365500   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:16:11.392730   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:16:11.392804   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:16:11.418158   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:16:11.418231   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 21:16:11.442606   29546 provision.go:87] duration metric: took 465.662046ms to configureAuth
	I0920 21:16:11.442634   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:16:11.442797   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:11.442863   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.445626   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446004   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.446037   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.446193   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.446437   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446574   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.446682   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.446829   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.447016   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.447037   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:16:11.680912   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:16:11.680937   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:16:11.680947   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetURL
	I0920 21:16:11.682272   29546 main.go:141] libmachine: (ha-797954-m02) DBG | Using libvirt version 6000000
	I0920 21:16:11.684914   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685189   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.685217   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.685358   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:16:11.685371   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:16:11.685378   29546 client.go:171] duration metric: took 24.476526726s to LocalClient.Create
	I0920 21:16:11.685403   29546 start.go:167] duration metric: took 24.47659369s to libmachine.API.Create "ha-797954"
	I0920 21:16:11.685417   29546 start.go:293] postStartSetup for "ha-797954-m02" (driver="kvm2")
	I0920 21:16:11.685429   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:16:11.685450   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.685687   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:16:11.685714   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.687637   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.687978   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.688009   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.688123   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.688289   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.688429   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.688559   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.773487   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:16:11.777926   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:16:11.777968   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:16:11.778045   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:16:11.778142   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:16:11.778153   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:16:11.778236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:16:11.787777   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:11.814167   29546 start.go:296] duration metric: took 128.737842ms for postStartSetup
	I0920 21:16:11.814211   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetConfigRaw
	I0920 21:16:11.814756   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.817446   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.817813   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.817843   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.818091   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:16:11.818271   29546 start.go:128] duration metric: took 24.627729213s to createHost
	I0920 21:16:11.818293   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.820157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820420   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.820457   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.820597   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.820754   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820876   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.820975   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.821154   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:16:11.821361   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.63 22 <nil> <nil>}
	I0920 21:16:11.821372   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:16:11.935903   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726866971.911740237
	
	I0920 21:16:11.935925   29546 fix.go:216] guest clock: 1726866971.911740237
	I0920 21:16:11.935931   29546 fix.go:229] Guest: 2024-09-20 21:16:11.911740237 +0000 UTC Remote: 2024-09-20 21:16:11.818282234 +0000 UTC m=+73.759482878 (delta=93.458003ms)
	I0920 21:16:11.935947   29546 fix.go:200] guest clock delta is within tolerance: 93.458003ms
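
	The clock check above runs `date +%s.%N` on the guest and compares it with host time, accepting a small skew (here about 93ms). A toy version of that comparison (the 2s tolerance is an assumption, and float parsing loses nanosecond precision, which is irrelevant at this scale):

	    package main

	    import (
	        "fmt"
	        "math"
	        "strconv"
	        "strings"
	        "time"
	    )

	    func main() {
	        // Output of `date +%s.%N` on the guest, as seen in the log above.
	        out := "1726866971.911740237"

	        secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	        if err != nil {
	            panic(err)
	        }
	        guest := time.Unix(0, int64(secs*float64(time.Second)))

	        skew := math.Abs(time.Since(guest).Seconds())
	        if skew <= 2 { // assumed tolerance for this sketch
	            fmt.Printf("guest clock delta is within tolerance: %.3fs\n", skew)
	        } else {
	            fmt.Printf("guest clock skewed by %.3fs, would resync\n", skew)
	        }
	    }
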
	I0920 21:16:11.935951   29546 start.go:83] releasing machines lock for "ha-797954-m02", held for 24.74549273s
	I0920 21:16:11.935977   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.936247   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:11.938751   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.939097   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.939121   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.941536   29546 out.go:177] * Found network options:
	I0920 21:16:11.943384   29546 out.go:177]   - NO_PROXY=192.168.39.240
	W0920 21:16:11.944703   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.944730   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945270   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945436   29546 main.go:141] libmachine: (ha-797954-m02) Calling .DriverName
	I0920 21:16:11.945528   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:16:11.945565   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	W0920 21:16:11.945595   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:16:11.945670   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:16:11.945694   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHHostname
	I0920 21:16:11.948268   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948343   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948669   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948696   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948719   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:11.948739   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:11.948891   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.948988   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHPort
	I0920 21:16:11.949074   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949141   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHKeyPath
	I0920 21:16:11.949205   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949260   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetSSHUsername
	I0920 21:16:11.949324   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:11.949401   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.63 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m02/id_rsa Username:docker}
	I0920 21:16:12.182357   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:16:12.188336   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:16:12.188410   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:16:12.207463   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:16:12.207490   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:16:12.207553   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:16:12.224528   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:16:12.239330   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:16:12.239398   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:16:12.254085   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:16:12.268825   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:16:12.382055   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:16:12.535413   29546 docker.go:233] disabling docker service ...
	I0920 21:16:12.535492   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:16:12.551349   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:16:12.564334   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:16:12.703477   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:16:12.833908   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:16:12.848370   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:16:12.867636   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:16:12.867695   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.878698   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:16:12.878757   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.889749   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.900887   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.911745   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:16:12.922758   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.933646   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.951100   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:16:12.961869   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:16:12.971480   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:16:12.971539   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:16:12.984107   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:16:12.993914   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:13.101378   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:16:13.200043   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:16:13.200113   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:16:13.204910   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:16:13.204975   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:16:13.208759   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:16:13.247360   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:16:13.247450   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.276920   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:16:13.308600   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:16:13.310088   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:16:13.311453   29546 main.go:141] libmachine: (ha-797954-m02) Calling .GetIP
	I0920 21:16:13.313821   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314157   29546 main.go:141] libmachine: (ha-797954-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:e9:69", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:16:02 +0000 UTC Type:0 Mac:52:54:00:53:e9:69 Iaid: IPaddr:192.168.39.63 Prefix:24 Hostname:ha-797954-m02 Clientid:01:52:54:00:53:e9:69}
	I0920 21:16:13.314177   29546 main.go:141] libmachine: (ha-797954-m02) DBG | domain ha-797954-m02 has defined IP address 192.168.39.63 and MAC address 52:54:00:53:e9:69 in network mk-ha-797954
	I0920 21:16:13.314359   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:16:13.318746   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:16:13.331582   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:16:13.331825   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:13.332085   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.332122   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.347535   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0920 21:16:13.348063   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.348528   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.348548   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.348889   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.349049   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:16:13.350568   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:13.350840   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:13.350872   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:13.365373   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0920 21:16:13.365792   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:13.366261   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:13.366281   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:13.366592   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:13.366760   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:13.366907   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.63
	I0920 21:16:13.366920   29546 certs.go:194] generating shared ca certs ...
	I0920 21:16:13.366937   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.367097   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:16:13.367133   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:16:13.367142   29546 certs.go:256] generating profile certs ...
	I0920 21:16:13.367211   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:16:13.367235   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c
	I0920 21:16:13.367251   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.254]
	I0920 21:16:13.455682   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c ...
	I0920 21:16:13.455713   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c: {Name:mkf7fe9b668446e8a4052aa048fbbf7677e2fbab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.455914   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c ...
	I0920 21:16:13.455929   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c: {Name:mk7e6a911b0af488356d39a10736e055b37e6a6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:16:13.456006   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:16:13.456131   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.6519369c -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
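The apiserver serving certificate is regenerated at this point because its subject alternative names must now cover the second control-plane node (192.168.39.63) and the kube-vip VIP (192.168.39.254) alongside the existing SANs. Below is a minimal crypto/x509 sketch of issuing such a cert; it reuses the IP list from the log but is only an illustration, not minikube's certs.go.

// Illustration only: issue a serving certificate whose IP SANs match the list
// above (service IPs, loopback, both node IPs, and the kube-vip VIP). The CA
// here is a throwaway stand-in for minikubeCA, not minikube's real code path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the apiserver.crt generation shown in the log above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.240"), net.ParseIP("192.168.39.63"), net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}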
	I0920 21:16:13.456249   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:16:13.456263   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:16:13.456276   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:16:13.456287   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:16:13.456297   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:16:13.456308   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:16:13.456317   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:16:13.456327   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:16:13.456340   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:16:13.456384   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:16:13.456413   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:16:13.456423   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:16:13.456445   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:16:13.456466   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:16:13.456487   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:16:13.456523   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:16:13.456549   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:16:13.456563   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:13.456574   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:16:13.456603   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:13.459546   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.459920   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:13.459938   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:13.460181   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:13.460348   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:13.460507   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:13.460594   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:13.535461   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:16:13.541124   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:16:13.556682   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:16:13.564013   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:16:13.575182   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:16:13.579951   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:16:13.591192   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:16:13.595498   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:16:13.606893   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:16:13.611454   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:16:13.622000   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:16:13.627844   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:16:13.640161   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:16:13.667082   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:16:13.691675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:16:13.718752   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:16:13.742806   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 21:16:13.767051   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:16:13.790981   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:16:13.815475   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:16:13.840035   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:16:13.864033   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:16:13.887540   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:16:13.912601   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:16:13.929455   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:16:13.946232   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:16:13.962927   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:16:13.979514   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:16:13.996480   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:16:14.013458   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:16:14.030697   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:16:14.036567   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:16:14.047523   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052253   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.052312   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:16:14.058027   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:16:14.068874   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:16:14.080054   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084884   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.084969   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:16:14.091061   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:16:14.102348   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:16:14.113259   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117584   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.117628   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:16:14.123165   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:16:14.133845   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:16:14.137859   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:16:14.137916   29546 kubeadm.go:934] updating node {m02 192.168.39.63 8443 v1.31.1 crio true true} ...
	I0920 21:16:14.138011   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.63
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:16:14.138044   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:16:14.138082   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:16:14.157200   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:16:14.157256   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
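The manifest above runs kube-vip as a static pod on each control-plane node; the instances elect a leader through the plndr-cp-lock Lease in kube-system (vip_leasename), and the leader answers for the VIP 192.168.39.254 while lb_enable/lb_port spread apiserver traffic on 8443. A small client-go sketch, an illustration rather than part of the test, that reports which node currently holds that Lease:

// Illustration: print the current kube-vip leader by reading the
// coordination.k8s.io Lease named by vip_leasename above. Assumes a reachable
// cluster; the kubeconfig path is taken from the first CLI argument.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1])
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		// HolderIdentity names the control-plane instance that currently owns the VIP.
		fmt.Println("kube-vip leader:", *lease.Spec.HolderIdentity)
	}
}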
	I0920 21:16:14.157308   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.167159   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:16:14.167253   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:16:14.176775   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:16:14.176800   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176860   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:16:14.176869   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0920 21:16:14.176896   29546 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0920 21:16:14.181293   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:16:14.181317   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:16:14.830604   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.830707   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:16:14.836995   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:16:14.837029   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:16:14.885458   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:16:14.921200   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.921297   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:16:14.928788   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:16:14.928843   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
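kubectl, kubeadm and kubelet are first looked for under /var/lib/minikube/binaries; since they are missing, each is copied over from the local cache, and the cache itself is filled from dl.k8s.io with the download pinned to the matching .sha256 file. A short sketch of that fetch-and-verify step, assuming the .sha256 URL returns the bare hex digest as the Kubernetes release binaries do (not minikube's download.go):

// Illustration: download a release binary and check it against its published sha256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl" // URL from the log above
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	want, err := fetch(base + ".sha256") // bare hex digest for k8s release binaries
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(bin)
	if hex.EncodeToString(sum[:]) != strings.TrimSpace(string(want)) {
		panic("checksum mismatch")
	}
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("verified", len(bin), "bytes")
}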
	I0920 21:16:15.405307   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:16:15.415161   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 21:16:15.432203   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:16:15.449139   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:16:15.466408   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:16:15.470447   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:16:15.483385   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:15.605870   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:15.622726   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:16:15.623063   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:16:15.623100   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:16:15.638072   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45517
	I0920 21:16:15.638612   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:16:15.639116   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:16:15.639140   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:16:15.639467   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:16:15.639639   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:16:15.639767   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:16:15.639881   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:16:15.639903   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:16:15.642798   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643197   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:16:15.643225   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:16:15.643351   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:16:15.643513   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:16:15.643635   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:16:15.643759   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:16:15.792657   29546 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:15.792709   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443"
	I0920 21:16:37.363850   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token kdqr52.0bg1ag3l84e32pbu --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m02 --control-plane --apiserver-advertise-address=192.168.39.63 --apiserver-bind-port=8443": (21.571119677s)
	I0920 21:16:37.363880   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 21:16:37.826315   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m02 minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:16:37.965429   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:16:38.074454   29546 start.go:319] duration metric: took 22.434680822s to joinCluster
	I0920 21:16:38.074530   29546 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:16:38.074847   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:16:38.076237   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:16:38.077911   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:16:38.297124   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:16:38.315206   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:16:38.315425   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:16:38.315479   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0920 21:16:38.315667   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m02" to be "Ready" ...
	I0920 21:16:38.315752   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.315761   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.315768   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.315773   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.327510   29546 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0920 21:16:38.816509   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:38.816531   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:38.816539   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:38.816543   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:38.820223   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.316657   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.316678   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.316687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.316692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.320207   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:39.816039   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:39.816063   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:39.816076   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:39.816081   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:39.819353   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.316075   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.316098   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.316106   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.316111   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.319539   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:40.320391   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:40.816105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:40.816128   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:40.816139   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:40.816143   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:40.819586   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:41.316211   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.316235   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.316245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.316249   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.349189   29546 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0920 21:16:41.816015   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:41.816038   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:41.816046   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:41.816049   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:41.819036   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:42.316442   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.316466   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.316476   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.316482   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.319998   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:42.320895   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:42.816283   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:42.816308   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:42.816320   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:42.816325   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:42.819743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.316284   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.316312   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.316323   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.316329   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.319785   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:43.816709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:43.816731   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:43.816742   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:43.816747   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:43.820871   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:44.316171   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.316193   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.316204   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.316208   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.319512   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.816802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:44.816826   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:44.816834   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:44.816839   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:44.820164   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:44.820813   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:45.315957   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.315983   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.315990   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.315994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.322831   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:16:45.816465   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:45.816487   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:45.816493   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:45.816499   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:45.820400   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.316676   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.316696   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.316705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.316710   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.320533   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:46.816215   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:46.816236   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:46.816245   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:46.816250   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:46.820038   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:47.316157   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.316175   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.316182   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.316187   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.319007   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:47.319706   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:47.816061   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:47.816084   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:47.816093   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:47.816099   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:47.819550   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:48.316265   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.316297   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.316309   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.316315   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.324229   29546 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0920 21:16:48.816174   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:48.816197   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:48.816207   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:48.816212   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:48.819164   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:49.316193   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.316219   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.316227   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.316231   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.319642   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:49.320250   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:49.816604   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:49.816629   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:49.816639   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:49.816643   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:49.820105   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.316008   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.316032   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.316043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.316050   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.320037   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:50.816542   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:50.816569   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:50.816578   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:50.816584   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:50.819913   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.316794   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.316842   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.316847   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.320154   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:51.320874   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:51.816190   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:51.816232   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:51.816240   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:51.816245   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:51.819606   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:52.316313   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.316334   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.316342   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.316345   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.319035   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:52.815947   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:52.815976   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:52.815984   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:52.815989   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:52.819657   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.316260   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.316286   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.316296   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.316302   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.319552   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.816326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:53.816347   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:53.816357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:53.816361   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:53.820253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:53.820949   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:54.316293   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.316325   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.316345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.316351   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.320156   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:54.816089   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:54.816111   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:54.816119   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:54.816122   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:54.819881   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.316810   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.316833   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.316841   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.316844   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.320308   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.816768   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:55.816797   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:55.816808   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:55.816814   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:55.820631   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:55.821331   29546 node_ready.go:53] node "ha-797954-m02" has status "Ready":"False"
	I0920 21:16:56.315901   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.315944   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.315956   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.315969   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.320010   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:56.816119   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:56.816140   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:56.816154   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:56.816158   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:56.820007   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.316463   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.316490   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.316501   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.316509   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.320023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:57.815912   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:57.815949   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:57.815957   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:57.815962   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:57.819365   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.316195   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.316218   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.316226   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.316230   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.320091   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.320848   29546 node_ready.go:49] node "ha-797954-m02" has status "Ready":"True"
	I0920 21:16:58.320865   29546 node_ready.go:38] duration metric: took 20.005171668s for node "ha-797954-m02" to be "Ready" ...
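Each GET above is minikube re-reading the Node object roughly every 500ms and checking its Ready condition until it turns True (about 20s here). A minimal client-go equivalent of that wait, written as a hedged sketch rather than minikube's node_ready.go:

// Illustration: poll a node until its Ready condition reports True.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodeName := "ha-797954-m02"
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep retrying on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(nodeName, "is Ready")
}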
	I0920 21:16:58.320873   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:16:58.320944   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:16:58.320955   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.320962   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.320966   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.326211   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:16:58.332482   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.332585   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:16:58.332594   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.332601   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.332605   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.336131   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.337007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.337022   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.337031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.337034   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.339737   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.340269   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.340285   29546 pod_ready.go:82] duration metric: took 7.773484ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.340294   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.340345   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:16:58.340352   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.340359   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.340363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.343186   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.343740   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.343754   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.343761   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.343764   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.346434   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.347695   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.347713   29546 pod_ready.go:82] duration metric: took 7.413319ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347723   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.347819   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:16:58.347828   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.347835   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.347838   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.350799   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.351620   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.351636   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.351645   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.351650   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.354248   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.354637   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.354653   29546 pod_ready.go:82] duration metric: took 6.924523ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354662   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.354733   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:16:58.354740   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.354747   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.354750   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.357249   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.357873   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:58.357891   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.357902   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.357909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.360353   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:16:58.360737   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.360754   29546 pod_ready.go:82] duration metric: took 6.065117ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.360772   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.517254   29546 request.go:632] Waited for 156.41306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517351   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:16:58.517362   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.517375   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.517386   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.520791   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:58.716729   29546 request.go:632] Waited for 195.35622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716806   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:58.716813   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.716825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.716832   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.720994   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:16:58.721498   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:58.721522   29546 pod_ready.go:82] duration metric: took 360.733755ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
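The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, not from the API server. A minimal sketch of where that limiter is configured, assuming a standard client-go setup (the QPS/Burst values are illustrative, not minikube's defaults):

    package kubeclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newThrottledClient builds a clientset whose requests pass through
    // client-go's client-side rate limiter; when the limiter is saturated,
    // requests wait and log "Waited for ... due to client-side throttling".
    func newThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        // Illustrative values; raising QPS/Burst shortens the waits seen above.
        cfg.QPS = 5
        cfg.Burst = 10
        return kubernetes.NewForConfig(cfg)
    }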
	I0920 21:16:58.721535   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:58.916778   29546 request.go:632] Waited for 195.15835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916853   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:16:58.916860   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:58.916871   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:58.916879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:58.920176   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.116231   29546 request.go:632] Waited for 195.32753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116326   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.116335   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.116348   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.116363   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.120233   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.120679   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.120701   29546 pod_ready.go:82] duration metric: took 399.159254ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.120711   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.316770   29546 request.go:632] Waited for 195.998601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316829   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:16:59.316835   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.316843   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.316848   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.320380   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.516308   29546 request.go:632] Waited for 195.285053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516365   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:16:59.516370   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.516385   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.516408   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.519762   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.520332   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.520354   29546 pod_ready.go:82] duration metric: took 399.636336ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.520366   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.716282   29546 request.go:632] Waited for 195.838293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:16:59.716355   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.716364   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.716369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.719757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.916877   29546 request.go:632] Waited for 196.253642ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916926   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:16:59.916930   29546 round_trippers.go:469] Request Headers:
	I0920 21:16:59.916941   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:16:59.916954   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:16:59.920360   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:16:59.920904   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:16:59.920927   29546 pod_ready.go:82] duration metric: took 400.552955ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:16:59.920940   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.116981   29546 request.go:632] Waited for 195.965997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117046   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:17:00.117052   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.117059   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.117064   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.120477   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.316320   29546 request.go:632] Waited for 195.169192ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316423   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:00.316431   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.316441   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.316453   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.319724   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.320229   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.320259   29546 pod_ready.go:82] duration metric: took 399.311666ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.320270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.516281   29546 request.go:632] Waited for 195.931067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516373   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:17:00.516383   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.516394   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.516403   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.521923   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:17:00.716927   29546 request.go:632] Waited for 194.244639ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.716994   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:00.717002   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.717011   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.717027   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.720660   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:00.721230   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:00.721249   29546 pod_ready.go:82] duration metric: took 400.97344ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.721259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:00.916498   29546 request.go:632] Waited for 195.181307ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:17:00.916580   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:00.916588   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:00.916592   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:00.920596   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.116571   29546 request.go:632] Waited for 195.40032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116628   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:17:01.116634   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.116641   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.116643   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.119984   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.120450   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.120470   29546 pod_ready.go:82] duration metric: took 399.204122ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.120483   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.316631   29546 request.go:632] Waited for 196.084204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316697   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:17:01.316703   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.316710   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.316712   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.320693   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.516582   29546 request.go:632] Waited for 195.353132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516664   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:17:01.516673   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.516684   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.516692   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.519980   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:01.520627   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:17:01.520645   29546 pod_ready.go:82] duration metric: took 400.154808ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:17:01.520658   29546 pod_ready.go:39] duration metric: took 3.19977376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
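Each "waiting up to 6m0s for pod ... to be Ready" block above resolves by fetching the pod and inspecting its Ready condition. A minimal client-go sketch of that check (names and wiring are illustrative, not minikube's exact code):

    package readiness

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady returns true when the named pod reports condition Ready=True,
    // mirroring the check behind the pod_ready.go lines above.
    func podIsReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }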
	I0920 21:17:01.520675   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:17:01.520730   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:17:01.537086   29546 api_server.go:72] duration metric: took 23.462517508s to wait for apiserver process to appear ...
	I0920 21:17:01.537114   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:17:01.537136   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:17:01.541714   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0920 21:17:01.541802   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:17:01.541813   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.541825   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.541834   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.542686   29546 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0920 21:17:01.542806   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:17:01.542827   29546 api_server.go:131] duration metric: took 5.705549ms to wait for apiserver health ...
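The healthz and version steps above are plain HTTPS GETs against the apiserver endpoint. A minimal sketch of the same probe; it skips certificate verification only because this is a sketch, whereas the real client trusts the cluster CA:

    package health

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz GETs <server>/healthz and reports whether the apiserver
    // answered 200 with the body "ok", as in the log above.
    func checkHealthz(server string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: real code should verify against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(server + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }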
	I0920 21:17:01.542841   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:17:01.717246   29546 request.go:632] Waited for 174.332661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717348   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:01.717358   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.717367   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.717379   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.722159   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.726592   29546 system_pods.go:59] 17 kube-system pods found
	I0920 21:17:01.726620   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:01.726625   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:01.726629   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:01.726632   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:01.726636   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:01.726639   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:01.726642   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:01.726645   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:01.726650   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:01.726654   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:01.726657   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:01.726660   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:01.726662   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:01.726665   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:01.726668   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:01.726671   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:01.726674   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:01.726679   29546 system_pods.go:74] duration metric: took 183.829123ms to wait for pod list to return data ...
	I0920 21:17:01.726686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:17:01.917125   29546 request.go:632] Waited for 190.363748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917200   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:17:01.917205   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:01.917212   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:01.917217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:01.921868   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:01.922074   29546 default_sa.go:45] found service account: "default"
	I0920 21:17:01.922087   29546 default_sa.go:55] duration metric: took 195.393903ms for default service account to be created ...
	I0920 21:17:01.922095   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:17:02.116523   29546 request.go:632] Waited for 194.354422ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116574   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:17:02.116579   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.116585   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.116590   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.121031   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:02.127015   29546 system_pods.go:86] 17 kube-system pods found
	I0920 21:17:02.127040   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:17:02.127047   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:17:02.127052   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:17:02.127056   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:17:02.127060   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:17:02.127064   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:17:02.127067   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:17:02.127071   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:17:02.127076   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:17:02.127079   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:17:02.127082   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:17:02.127086   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:17:02.127089   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:17:02.127093   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:17:02.127096   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:17:02.127099   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:17:02.127103   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:17:02.127109   29546 system_pods.go:126] duration metric: took 205.01008ms to wait for k8s-apps to be running ...
	I0920 21:17:02.127118   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:17:02.127161   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:02.143916   29546 system_svc.go:56] duration metric: took 16.788182ms WaitForService to wait for kubelet
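The kubelet check above shells out to systemctl on the node over the ssh_runner and relies only on the exit code. A minimal local sketch of the same idea (the real run goes through SSH, and the exact unit arguments may differ):

    package svc

    import "os/exec"

    // kubeletActive returns true when systemd reports the kubelet unit active;
    // `systemctl is-active --quiet` exits 0 only in that case.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }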
	I0920 21:17:02.143952   29546 kubeadm.go:582] duration metric: took 24.069387794s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:17:02.143974   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:17:02.316696   29546 request.go:632] Waited for 172.648744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316747   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:17:02.316752   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:02.316763   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:02.316768   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:02.320760   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:02.321468   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321490   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321503   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:17:02.321508   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:17:02.321513   29546 node_conditions.go:105] duration metric: took 177.533781ms to run NodePressure ...
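The NodePressure step reads each node's capacity, which is where the ephemeral-storage and CPU figures above come from. A minimal client-go sketch of pulling those two values (illustrative, not minikube's exact code):

    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printCapacities lists every node's ephemeral-storage and CPU capacity,
    // the same fields reported by node_conditions.go above.
    func printCapacities(ctx context.Context, c kubernetes.Interface) error {
        nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }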
	I0920 21:17:02.321541   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:17:02.321567   29546 start.go:255] writing updated cluster config ...
	I0920 21:17:02.323943   29546 out.go:201] 
	I0920 21:17:02.325378   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:02.325468   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.326982   29546 out.go:177] * Starting "ha-797954-m03" control-plane node in "ha-797954" cluster
	I0920 21:17:02.328317   29546 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:17:02.328336   29546 cache.go:56] Caching tarball of preloaded images
	I0920 21:17:02.328441   29546 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:17:02.328453   29546 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:17:02.328536   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:02.328735   29546 start.go:360] acquireMachinesLock for ha-797954-m03: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:17:02.328796   29546 start.go:364] duration metric: took 33.998µs to acquireMachinesLock for "ha-797954-m03"
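acquireMachinesLock serializes machine creation per profile (note the Delay:500ms and Timeout:13m0s in the lock spec above). A minimal sketch of one way to take such an exclusive lock with a lock file; this illustrates the idea only and is not minikube's actual lock implementation:

    package lock

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire repeatedly tries to create the lock file exclusively until it
    // succeeds or the timeout expires; releasing is just removing the file.
    func acquire(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                return f.Close()
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for lock %s: %v", path, err)
            }
            time.Sleep(500 * time.Millisecond) // matches the 500ms delay in the lock spec above
        }
    }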
	I0920 21:17:02.328812   29546 start.go:93] Provisioning new machine with config: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
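The trailing "&{Name:m03 IP: Port:8443 ...}" in the config dump is the per-node entry being provisioned. A small struct sketch mirroring only the fields visible in that dump; the field names come from the dump itself, while the Go types are assumptions for illustration and this is not minikube's actual config type:

    package config

    // Node mirrors the per-node fields printed in the cluster config above.
    type Node struct {
        Name              string // "m03" for the node being created
        IP                string // empty until the VM gets an address
        Port              int    // 8443
        KubernetesVersion string // "v1.31.1"
        ContainerRuntime  string // "crio"
        ControlPlane      bool
        Worker            bool
    }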
	I0920 21:17:02.328901   29546 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0920 21:17:02.330315   29546 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 21:17:02.330414   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:02.330461   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:02.345778   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
	I0920 21:17:02.346279   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:02.346747   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:02.346772   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:02.347086   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:02.347350   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:02.347468   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:02.347668   29546 start.go:159] libmachine.API.Create for "ha-797954" (driver="kvm2")
	I0920 21:17:02.347694   29546 client.go:168] LocalClient.Create starting
	I0920 21:17:02.347736   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 21:17:02.347785   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347800   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347860   29546 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 21:17:02.347881   29546 main.go:141] libmachine: Decoding PEM data...
	I0920 21:17:02.347891   29546 main.go:141] libmachine: Parsing certificate...
	I0920 21:17:02.347908   29546 main.go:141] libmachine: Running pre-create checks...
	I0920 21:17:02.347919   29546 main.go:141] libmachine: (ha-797954-m03) Calling .PreCreateCheck
	I0920 21:17:02.348067   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:02.348542   29546 main.go:141] libmachine: Creating machine...
	I0920 21:17:02.348556   29546 main.go:141] libmachine: (ha-797954-m03) Calling .Create
	I0920 21:17:02.348764   29546 main.go:141] libmachine: (ha-797954-m03) Creating KVM machine...
	I0920 21:17:02.350052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing default KVM network
	I0920 21:17:02.350134   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found existing private KVM network mk-ha-797954
	I0920 21:17:02.350236   29546 main.go:141] libmachine: (ha-797954-m03) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.350270   29546 main.go:141] libmachine: (ha-797954-m03) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 21:17:02.350391   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.350253   30348 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.350445   29546 main.go:141] libmachine: (ha-797954-m03) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 21:17:02.577042   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.576911   30348 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa...
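The id_rsa created above is an ordinary RSA keypair written in OpenSSH-compatible form. A minimal sketch of generating one in Go; the key size and output paths are illustrative, not necessarily what the driver uses:

    package sshkey

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // writeKeyPair generates an RSA key and writes <path> (PEM private key,
    // mode 0600) and <path>.pub (authorized_keys format), like the id_rsa above.
    func writeKeyPair(path string) error {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        if err := os.WriteFile(path, privPEM, 0o600); err != nil {
            return err
        }
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            return err
        }
        return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0o644)
    }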
	I0920 21:17:02.643000   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642827   30348 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk...
	I0920 21:17:02.643045   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing magic tar header
	I0920 21:17:02.643062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Writing SSH key tar header
	I0920 21:17:02.643075   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:02.642998   30348 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 ...
	I0920 21:17:02.643166   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03
	I0920 21:17:02.643195   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03 (perms=drwx------)
	I0920 21:17:02.643206   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 21:17:02.643220   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:17:02.643226   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 21:17:02.643312   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 21:17:02.643333   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 21:17:02.643345   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 21:17:02.643356   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home/jenkins
	I0920 21:17:02.643367   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 21:17:02.643381   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 21:17:02.643392   29546 main.go:141] libmachine: (ha-797954-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 21:17:02.643412   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Checking permissions on dir: /home
	I0920 21:17:02.643422   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:02.643428   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Skipping /home - not owner
	I0920 21:17:02.644451   29546 main.go:141] libmachine: (ha-797954-m03) define libvirt domain using xml: 
	I0920 21:17:02.644469   29546 main.go:141] libmachine: (ha-797954-m03) <domain type='kvm'>
	I0920 21:17:02.644475   29546 main.go:141] libmachine: (ha-797954-m03)   <name>ha-797954-m03</name>
	I0920 21:17:02.644480   29546 main.go:141] libmachine: (ha-797954-m03)   <memory unit='MiB'>2200</memory>
	I0920 21:17:02.644485   29546 main.go:141] libmachine: (ha-797954-m03)   <vcpu>2</vcpu>
	I0920 21:17:02.644491   29546 main.go:141] libmachine: (ha-797954-m03)   <features>
	I0920 21:17:02.644498   29546 main.go:141] libmachine: (ha-797954-m03)     <acpi/>
	I0920 21:17:02.644507   29546 main.go:141] libmachine: (ha-797954-m03)     <apic/>
	I0920 21:17:02.644515   29546 main.go:141] libmachine: (ha-797954-m03)     <pae/>
	I0920 21:17:02.644524   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644531   29546 main.go:141] libmachine: (ha-797954-m03)   </features>
	I0920 21:17:02.644536   29546 main.go:141] libmachine: (ha-797954-m03)   <cpu mode='host-passthrough'>
	I0920 21:17:02.644543   29546 main.go:141] libmachine: (ha-797954-m03)   
	I0920 21:17:02.644553   29546 main.go:141] libmachine: (ha-797954-m03)   </cpu>
	I0920 21:17:02.644561   29546 main.go:141] libmachine: (ha-797954-m03)   <os>
	I0920 21:17:02.644566   29546 main.go:141] libmachine: (ha-797954-m03)     <type>hvm</type>
	I0920 21:17:02.644573   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='cdrom'/>
	I0920 21:17:02.644577   29546 main.go:141] libmachine: (ha-797954-m03)     <boot dev='hd'/>
	I0920 21:17:02.644582   29546 main.go:141] libmachine: (ha-797954-m03)     <bootmenu enable='no'/>
	I0920 21:17:02.644588   29546 main.go:141] libmachine: (ha-797954-m03)   </os>
	I0920 21:17:02.644595   29546 main.go:141] libmachine: (ha-797954-m03)   <devices>
	I0920 21:17:02.644606   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='cdrom'>
	I0920 21:17:02.644627   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/boot2docker.iso'/>
	I0920 21:17:02.644643   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hdc' bus='scsi'/>
	I0920 21:17:02.644649   29546 main.go:141] libmachine: (ha-797954-m03)       <readonly/>
	I0920 21:17:02.644656   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644661   29546 main.go:141] libmachine: (ha-797954-m03)     <disk type='file' device='disk'>
	I0920 21:17:02.644670   29546 main.go:141] libmachine: (ha-797954-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 21:17:02.644678   29546 main.go:141] libmachine: (ha-797954-m03)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/ha-797954-m03.rawdisk'/>
	I0920 21:17:02.644684   29546 main.go:141] libmachine: (ha-797954-m03)       <target dev='hda' bus='virtio'/>
	I0920 21:17:02.644689   29546 main.go:141] libmachine: (ha-797954-m03)     </disk>
	I0920 21:17:02.644698   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644710   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='mk-ha-797954'/>
	I0920 21:17:02.644724   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644734   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644742   29546 main.go:141] libmachine: (ha-797954-m03)     <interface type='network'>
	I0920 21:17:02.644747   29546 main.go:141] libmachine: (ha-797954-m03)       <source network='default'/>
	I0920 21:17:02.644754   29546 main.go:141] libmachine: (ha-797954-m03)       <model type='virtio'/>
	I0920 21:17:02.644759   29546 main.go:141] libmachine: (ha-797954-m03)     </interface>
	I0920 21:17:02.644765   29546 main.go:141] libmachine: (ha-797954-m03)     <serial type='pty'>
	I0920 21:17:02.644770   29546 main.go:141] libmachine: (ha-797954-m03)       <target port='0'/>
	I0920 21:17:02.644776   29546 main.go:141] libmachine: (ha-797954-m03)     </serial>
	I0920 21:17:02.644783   29546 main.go:141] libmachine: (ha-797954-m03)     <console type='pty'>
	I0920 21:17:02.644798   29546 main.go:141] libmachine: (ha-797954-m03)       <target type='serial' port='0'/>
	I0920 21:17:02.644809   29546 main.go:141] libmachine: (ha-797954-m03)     </console>
	I0920 21:17:02.644818   29546 main.go:141] libmachine: (ha-797954-m03)     <rng model='virtio'>
	I0920 21:17:02.644842   29546 main.go:141] libmachine: (ha-797954-m03)       <backend model='random'>/dev/random</backend>
	I0920 21:17:02.644850   29546 main.go:141] libmachine: (ha-797954-m03)     </rng>
	I0920 21:17:02.644855   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644858   29546 main.go:141] libmachine: (ha-797954-m03)     
	I0920 21:17:02.644865   29546 main.go:141] libmachine: (ha-797954-m03)   </devices>
	I0920 21:17:02.644875   29546 main.go:141] libmachine: (ha-797954-m03) </domain>
	I0920 21:17:02.644883   29546 main.go:141] libmachine: (ha-797954-m03) 
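The XML above is handed to libvirt to define and boot the new domain. A minimal sketch of that step, assuming the libvirt.org/go/libvirt bindings; the kvm2 driver's real code path carries more configuration and error handling:

    package kvm

    import (
        libvirt "libvirt.org/go/libvirt"
    )

    // defineAndStart registers the domain XML with libvirt (equivalent to
    // `virsh define`) and then boots it (equivalent to `virsh start`).
    func defineAndStart(domainXML string) (*libvirt.Domain, error) {
        conn, err := libvirt.NewConnect("qemu:///system") // the KVMQemuURI from the config above
        if err != nil {
            return nil, err
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return nil, err
        }
        if err := dom.Create(); err != nil { // Create() starts a defined domain
            return nil, err
        }
        return dom, nil
    }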
	I0920 21:17:02.651978   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:63:a3:c0 in network default
	I0920 21:17:02.652689   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:02.652709   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring networks are active...
	I0920 21:17:02.653425   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network default is active
	I0920 21:17:02.653794   29546 main.go:141] libmachine: (ha-797954-m03) Ensuring network mk-ha-797954 is active
	I0920 21:17:02.654206   29546 main.go:141] libmachine: (ha-797954-m03) Getting domain xml...
	I0920 21:17:02.654891   29546 main.go:141] libmachine: (ha-797954-m03) Creating domain...
	I0920 21:17:03.891211   29546 main.go:141] libmachine: (ha-797954-m03) Waiting to get IP...
	I0920 21:17:03.891998   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:03.892564   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:03.892584   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:03.892529   30348 retry.go:31] will retry after 249.905879ms: waiting for machine to come up
	I0920 21:17:04.144187   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.144615   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.144661   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.144597   30348 retry.go:31] will retry after 316.328897ms: waiting for machine to come up
	I0920 21:17:04.462062   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.462511   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.462533   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.462462   30348 retry.go:31] will retry after 364.78104ms: waiting for machine to come up
	I0920 21:17:04.828892   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:04.829305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:04.829331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:04.829266   30348 retry.go:31] will retry after 450.317187ms: waiting for machine to come up
	I0920 21:17:05.280860   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.281373   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.281406   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.281315   30348 retry.go:31] will retry after 476.652629ms: waiting for machine to come up
	I0920 21:17:05.760024   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:05.760485   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:05.760518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:05.760438   30348 retry.go:31] will retry after 666.650329ms: waiting for machine to come up
	I0920 21:17:06.428088   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:06.428570   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:06.428605   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:06.428544   30348 retry.go:31] will retry after 895.276152ms: waiting for machine to come up
	I0920 21:17:07.325647   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:07.326053   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:07.326072   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:07.326016   30348 retry.go:31] will retry after 1.229974937s: waiting for machine to come up
	I0920 21:17:08.557784   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:08.558294   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:08.558321   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:08.558256   30348 retry.go:31] will retry after 1.130091493s: waiting for machine to come up
	I0920 21:17:09.689958   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:09.690504   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:09.690534   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:09.690449   30348 retry.go:31] will retry after 1.704043253s: waiting for machine to come up
	I0920 21:17:11.396325   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:11.396797   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:11.396820   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:11.396759   30348 retry.go:31] will retry after 2.089116318s: waiting for machine to come up
	I0920 21:17:13.487459   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:13.487939   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:13.487964   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:13.487890   30348 retry.go:31] will retry after 2.803235066s: waiting for machine to come up
	I0920 21:17:16.292437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:16.292814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:16.292830   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:16.292780   30348 retry.go:31] will retry after 3.964680632s: waiting for machine to come up
	I0920 21:17:20.260043   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:20.260437   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find current IP address of domain ha-797954-m03 in network mk-ha-797954
	I0920 21:17:20.260457   29546 main.go:141] libmachine: (ha-797954-m03) DBG | I0920 21:17:20.260397   30348 retry.go:31] will retry after 5.26446379s: waiting for machine to come up
	I0920 21:17:25.528157   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.528572   29546 main.go:141] libmachine: (ha-797954-m03) Found IP for machine: 192.168.39.108
	I0920 21:17:25.528600   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has current primary IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
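The "will retry after ..." lines above come from a retry helper that re-checks the DHCP leases with growing delays until the domain has an address. A minimal sketch of that polling pattern; the growth factor and condition function are illustrative, not retry.go's exact behaviour:

    package retry

    import (
        "fmt"
        "time"
    )

    // until polls cond with a growing delay between attempts, mirroring the
    // "will retry after ..." behaviour in the log above.
    func until(cond func() (bool, error), initial time.Duration, maxAttempts int) error {
        delay := initial
        for i := 0; i < maxAttempts; i++ {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the wait between attempts
        }
        return fmt.Errorf("condition not met after %d attempts", maxAttempts)
    }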
	I0920 21:17:25.528606   29546 main.go:141] libmachine: (ha-797954-m03) Reserving static IP address...
	I0920 21:17:25.529052   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "ha-797954-m03", mac: "52:54:00:37:e1:1a", ip: "192.168.39.108"} in network mk-ha-797954
	I0920 21:17:25.602757   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:25.602780   29546 main.go:141] libmachine: (ha-797954-m03) Reserved static IP address: 192.168.39.108
	I0920 21:17:25.602810   29546 main.go:141] libmachine: (ha-797954-m03) Waiting for SSH to be available...
	I0920 21:17:25.605777   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:25.606165   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954
	I0920 21:17:25.606196   29546 main.go:141] libmachine: (ha-797954-m03) DBG | unable to find defined IP address of network mk-ha-797954 interface with MAC address 52:54:00:37:e1:1a
	I0920 21:17:25.606399   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:25.606418   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:25.606467   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:25.606479   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:25.606491   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:25.610090   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: exit status 255: 
	I0920 21:17:25.610107   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0920 21:17:25.610116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | command : exit 0
	I0920 21:17:25.610123   29546 main.go:141] libmachine: (ha-797954-m03) DBG | err     : exit status 255
	I0920 21:17:25.610132   29546 main.go:141] libmachine: (ha-797954-m03) DBG | output  : 
	I0920 21:17:28.610361   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Getting to WaitForSSH function...
	I0920 21:17:28.612774   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613300   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.613345   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.613482   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH client type: external
	I0920 21:17:28.613508   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa (-rw-------)
	I0920 21:17:28.613536   29546 main.go:141] libmachine: (ha-797954-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.108 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 21:17:28.613547   29546 main.go:141] libmachine: (ha-797954-m03) DBG | About to run SSH command:
	I0920 21:17:28.613558   29546 main.go:141] libmachine: (ha-797954-m03) DBG | exit 0
	I0920 21:17:28.739518   29546 main.go:141] libmachine: (ha-797954-m03) DBG | SSH cmd err, output: <nil>: 
	I0920 21:17:28.739782   29546 main.go:141] libmachine: (ha-797954-m03) KVM machine creation complete!
	I0920 21:17:28.740050   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:28.740606   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740766   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:28.740895   29546 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 21:17:28.740905   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetState
	I0920 21:17:28.742248   29546 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 21:17:28.742261   29546 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 21:17:28.742266   29546 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 21:17:28.742271   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.744450   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.744848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.744878   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.745030   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.745195   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745319   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.745434   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.745605   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.745786   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.745797   29546 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 21:17:28.858225   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:28.858252   29546 main.go:141] libmachine: Detecting the provisioner...
	I0920 21:17:28.858263   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.860945   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861330   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.861367   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.861582   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.861765   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.861923   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.862049   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.862220   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.862409   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.862421   29546 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 21:17:28.971754   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 21:17:28.971833   29546 main.go:141] libmachine: found compatible host: buildroot
	I0920 21:17:28.971843   29546 main.go:141] libmachine: Provisioning with buildroot...
	I0920 21:17:28.971858   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972111   29546 buildroot.go:166] provisioning hostname "ha-797954-m03"
	I0920 21:17:28.972133   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:28.972282   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:28.974838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975230   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:28.975254   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:28.975370   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:28.975557   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975667   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:28.975797   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:28.975919   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:28.976108   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:28.976126   29546 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954-m03 && echo "ha-797954-m03" | sudo tee /etc/hostname
	I0920 21:17:29.102868   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954-m03
	
	I0920 21:17:29.102918   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.106396   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.106814   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.106838   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.107105   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.107316   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107475   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.107629   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.107783   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.107967   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.107983   29546 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:17:29.224170   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:17:29.224200   29546 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:17:29.224221   29546 buildroot.go:174] setting up certificates
	I0920 21:17:29.224238   29546 provision.go:84] configureAuth start
	I0920 21:17:29.224249   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetMachineName
	I0920 21:17:29.224497   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:29.228199   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228591   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.228621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.228737   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.230940   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231290   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.231315   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.231436   29546 provision.go:143] copyHostCerts
	I0920 21:17:29.231461   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231488   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:17:29.231497   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:17:29.231560   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:17:29.231661   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231679   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:17:29.231683   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:17:29.231706   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:17:29.231764   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231786   29546 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:17:29.231795   29546 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:17:29.231816   29546 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:17:29.231871   29546 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954-m03 san=[127.0.0.1 192.168.39.108 ha-797954-m03 localhost minikube]
	I0920 21:17:29.482373   29546 provision.go:177] copyRemoteCerts
	I0920 21:17:29.482434   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:17:29.482467   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.485158   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485535   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.485562   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.485735   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.485957   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.486154   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.486290   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.571854   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:17:29.571937   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:17:29.597446   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:17:29.597536   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 21:17:29.623922   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:17:29.624016   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:17:29.649582   29546 provision.go:87] duration metric: took 425.330124ms to configureAuth
	I0920 21:17:29.649619   29546 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:17:29.649891   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:29.650000   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.652875   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653331   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.653362   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.653612   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.653807   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.653962   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.654098   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.654239   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:29.654427   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:29.654450   29546 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:17:29.884767   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:17:29.884803   29546 main.go:141] libmachine: Checking connection to Docker...
	I0920 21:17:29.884813   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetURL
	I0920 21:17:29.886065   29546 main.go:141] libmachine: (ha-797954-m03) DBG | Using libvirt version 6000000
	I0920 21:17:29.888305   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888643   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.888670   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.888892   29546 main.go:141] libmachine: Docker is up and running!
	I0920 21:17:29.888907   29546 main.go:141] libmachine: Reticulating splines...
	I0920 21:17:29.888914   29546 client.go:171] duration metric: took 27.541211935s to LocalClient.Create
	I0920 21:17:29.888937   29546 start.go:167] duration metric: took 27.541269184s to libmachine.API.Create "ha-797954"
	I0920 21:17:29.888947   29546 start.go:293] postStartSetup for "ha-797954-m03" (driver="kvm2")
	I0920 21:17:29.888957   29546 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:17:29.888974   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:29.889224   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:17:29.889254   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:29.891311   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891621   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:29.891640   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:29.891801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:29.891981   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:29.892102   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:29.892225   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:29.978349   29546 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:17:29.983427   29546 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:17:29.983455   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:17:29.983538   29546 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:17:29.983634   29546 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:17:29.983646   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:17:29.983753   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:17:29.995291   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:30.022903   29546 start.go:296] duration metric: took 133.944509ms for postStartSetup
	I0920 21:17:30.022958   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetConfigRaw
	I0920 21:17:30.023537   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.026168   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026517   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.026545   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.026878   29546 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:17:30.027177   29546 start.go:128] duration metric: took 27.698265683s to createHost
	I0920 21:17:30.027205   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.029751   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030099   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.030120   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.030296   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.030478   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030638   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.030742   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.030890   29546 main.go:141] libmachine: Using SSH client type: native
	I0920 21:17:30.031075   29546 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.108 22 <nil> <nil>}
	I0920 21:17:30.031086   29546 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:17:30.140028   29546 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726867050.118100270
	
	I0920 21:17:30.140050   29546 fix.go:216] guest clock: 1726867050.118100270
	I0920 21:17:30.140059   29546 fix.go:229] Guest: 2024-09-20 21:17:30.11810027 +0000 UTC Remote: 2024-09-20 21:17:30.027192785 +0000 UTC m=+151.968393428 (delta=90.907485ms)
	I0920 21:17:30.140081   29546 fix.go:200] guest clock delta is within tolerance: 90.907485ms
	I0920 21:17:30.140087   29546 start.go:83] releasing machines lock for "ha-797954-m03", held for 27.811283914s
	I0920 21:17:30.140109   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.140335   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:30.142848   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.143223   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.143247   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.145549   29546 out.go:177] * Found network options:
	I0920 21:17:30.147066   29546 out.go:177]   - NO_PROXY=192.168.39.240,192.168.39.63
	W0920 21:17:30.148256   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.148273   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.148287   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.148801   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149026   29546 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:17:30.149134   29546 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:17:30.149168   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	W0920 21:17:30.149244   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	W0920 21:17:30.149266   29546 proxy.go:119] fail to check proxy env: Error ip not in block
	I0920 21:17:30.149325   29546 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:17:30.149344   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:17:30.151785   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152017   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152092   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152116   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152242   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152398   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:30.152420   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:30.152411   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152594   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152619   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:17:30.152752   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:17:30.152750   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.152866   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:17:30.152988   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:17:30.415462   29546 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:17:30.422013   29546 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:17:30.422074   29546 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:17:30.439757   29546 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 21:17:30.439785   29546 start.go:495] detecting cgroup driver to use...
	I0920 21:17:30.439851   29546 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:17:30.457470   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:17:30.472261   29546 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:17:30.472327   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:17:30.488825   29546 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:17:30.504407   29546 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:17:30.628900   29546 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:17:30.786553   29546 docker.go:233] disabling docker service ...
	I0920 21:17:30.786634   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:17:30.801796   29546 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:17:30.815995   29546 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:17:30.961390   29546 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:17:31.087877   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:17:31.103154   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:17:31.123733   29546 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:17:31.123806   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.135304   29546 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:17:31.135368   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.147157   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.158091   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.169385   29546 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:17:31.181217   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.192527   29546 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.210306   29546 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:17:31.221334   29546 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:17:31.233227   29546 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 21:17:31.233302   29546 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 21:17:31.249920   29546 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:17:31.260187   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:31.382201   29546 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:17:31.482956   29546 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:17:31.483037   29546 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:17:31.488476   29546 start.go:563] Will wait 60s for crictl version
	I0920 21:17:31.488554   29546 ssh_runner.go:195] Run: which crictl
	I0920 21:17:31.492584   29546 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:17:31.538709   29546 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:17:31.538785   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.572182   29546 ssh_runner.go:195] Run: crio --version
	I0920 21:17:31.603894   29546 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:17:31.605343   29546 out.go:177]   - env NO_PROXY=192.168.39.240
	I0920 21:17:31.606667   29546 out.go:177]   - env NO_PROXY=192.168.39.240,192.168.39.63
	I0920 21:17:31.607748   29546 main.go:141] libmachine: (ha-797954-m03) Calling .GetIP
	I0920 21:17:31.610281   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610645   29546 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:17:31.610679   29546 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:17:31.610868   29546 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:17:31.615319   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:17:31.628360   29546 mustload.go:65] Loading cluster: ha-797954
	I0920 21:17:31.628620   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:31.628963   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.629005   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.643946   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38889
	I0920 21:17:31.644427   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.644862   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.644878   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.645201   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.645370   29546 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:17:31.646870   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:31.647201   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:31.647238   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:31.662472   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37679
	I0920 21:17:31.662941   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:31.663429   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:31.663454   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:31.663746   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:31.663910   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:31.664076   29546 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.108
	I0920 21:17:31.664086   29546 certs.go:194] generating shared ca certs ...
	I0920 21:17:31.664103   29546 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:31.664243   29546 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:17:31.664295   29546 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:17:31.664308   29546 certs.go:256] generating profile certs ...
	I0920 21:17:31.664400   29546 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:17:31.664432   29546 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a
	I0920 21:17:31.664451   29546 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.108 192.168.39.254]
	I0920 21:17:31.999984   29546 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a ...
	I0920 21:17:32.000017   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a: {Name:mkb265b9a0e489fa9c79757fad17ff69e435c7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000213   29546 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a ...
	I0920 21:17:32.000228   29546 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a: {Name:mk577de20bc228548ac6019ec38c5032817178c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:17:32.000340   29546 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:17:32.000512   29546 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9a277b8a -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:17:32.000680   29546 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:17:32.000701   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:17:32.000718   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:17:32.000728   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:17:32.000741   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:17:32.000756   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:17:32.000773   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:17:32.000790   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:17:32.019102   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:17:32.019214   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:17:32.019253   29546 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:17:32.019263   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:17:32.019285   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:17:32.019307   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:17:32.019328   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:17:32.019367   29546 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:17:32.019392   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.019407   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.019422   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.019452   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:32.022485   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.022895   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:32.022924   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:32.023142   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:32.023341   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:32.023495   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:32.023639   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:32.099301   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0920 21:17:32.104861   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0920 21:17:32.119922   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0920 21:17:32.124156   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0920 21:17:32.134999   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0920 21:17:32.139238   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0920 21:17:32.149597   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0920 21:17:32.154030   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0920 21:17:32.167439   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0920 21:17:32.172928   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0920 21:17:32.184258   29546 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0920 21:17:32.189404   29546 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0920 21:17:32.201402   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:17:32.227964   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:17:32.252476   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:17:32.280434   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:17:32.305098   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0920 21:17:32.330545   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:17:32.355250   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:17:32.380481   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:17:32.409052   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:17:32.434675   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:17:32.460671   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:17:32.485722   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0920 21:17:32.503366   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0920 21:17:32.520848   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0920 21:17:32.537950   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0920 21:17:32.555973   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0920 21:17:32.574604   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0920 21:17:32.592470   29546 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0920 21:17:32.612075   29546 ssh_runner.go:195] Run: openssl version
	I0920 21:17:32.618522   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:17:32.629763   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634570   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.634649   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:17:32.640735   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:17:32.651846   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:17:32.663634   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668436   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.668505   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:17:32.674910   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:17:32.686102   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:17:32.697672   29546 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702132   29546 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.702197   29546 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:17:32.707913   29546 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:17:32.719066   29546 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:17:32.723392   29546 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 21:17:32.723441   29546 kubeadm.go:934] updating node {m03 192.168.39.108 8443 v1.31.1 crio true true} ...
	I0920 21:17:32.723510   29546 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.108
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:17:32.723543   29546 kube-vip.go:115] generating kube-vip config ...
	I0920 21:17:32.723582   29546 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:17:32.740751   29546 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:17:32.740840   29546 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
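
	The YAML above is the kube-vip static pod manifest that minikube later copies to /etc/kubernetes/manifests/kube-vip.yaml (the 1441-byte scp at 21:17:33.770318 below); the kubelet runs it as a static pod so the control-plane VIP 192.168.39.254:8443 stays reachable across nodes. As a rough illustration only (the struct, template text, and field names below are hypothetical and are not minikube's kube-vip.go), such a manifest could be rendered from Go like this:

// A minimal sketch (not minikube's kube-vip.go) of rendering a kube-vip
// static pod manifest from a template; VIPConfig and manifestTmpl are
// hypothetical, only the VIP/port/interface/image values come from the log.
package main

import (
	"os"
	"text/template"
)

type VIPConfig struct {
	VIP       string
	Port      int
	Interface string
	Image     string
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
  hostNetwork: true
`

func main() {
	cfg := VIPConfig{
		VIP:       "192.168.39.254",
		Port:      8443,
		Interface: "eth0",
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
	}
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
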
	I0920 21:17:32.740906   29546 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.750919   29546 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0920 21:17:32.750992   29546 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0920 21:17:32.760956   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0920 21:17:32.760966   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.760984   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761008   29546 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I0920 21:17:32.761029   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0920 21:17:32.761047   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:17:32.770258   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0920 21:17:32.770302   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0920 21:17:32.770469   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0920 21:17:32.770497   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0920 21:17:32.785590   29546 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.785720   29546 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0920 21:17:32.887818   29546 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0920 21:17:32.887866   29546 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
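
	The three binary transfers above all follow the same check-then-copy pattern: stat the remote path and only scp the cached binary when the stat fails. A rough sketch of that flow, assuming the system ssh/scp binaries and placeholder host/paths rather than minikube's ssh_runner:

// A rough sketch (assumed example, not minikube's ssh_runner) of the
// "stat, then scp only when missing" pattern shown above, shelling out
// to the system ssh/scp binaries; host and paths are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

// ensureRemoteFile copies localPath to host:remotePath only if the remote
// file does not already exist (mirroring the existence checks in the log).
func ensureRemoteFile(host, localPath, remotePath string) error {
	// stat exits non-zero when the file is absent, just like the
	// "Process exited with status 1" lines above.
	if err := exec.Command("ssh", host, "stat", remotePath).Run(); err == nil {
		return nil // already present, nothing to transfer
	}
	out, err := exec.Command("scp", localPath, host+":"+remotePath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s: %v: %s", localPath, err, out)
	}
	return nil
}

func main() {
	// Placeholder host and paths, for illustration only.
	err := ensureRemoteFile("root@192.168.39.108",
		"/tmp/cache/kubelet", "/var/lib/minikube/binaries/v1.31.1/kubelet")
	fmt.Println(err)
}
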
	I0920 21:17:33.724236   29546 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0920 21:17:33.735002   29546 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0920 21:17:33.752570   29546 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:17:33.770318   29546 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:17:33.790184   29546 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:17:33.794843   29546 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 21:17:33.808084   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:33.931534   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:33.948935   29546 host.go:66] Checking if "ha-797954" exists ...
	I0920 21:17:33.949425   29546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:17:33.949481   29546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:17:33.965177   29546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35431
	I0920 21:17:33.966119   29546 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:17:33.966693   29546 main.go:141] libmachine: Using API Version  1
	I0920 21:17:33.966718   29546 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:17:33.967135   29546 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:17:33.967319   29546 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:17:33.967487   29546 start.go:317] joinCluster: &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:17:33.967629   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0920 21:17:33.967652   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:17:33.970529   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.970999   29546 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:17:33.971028   29546 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:17:33.971157   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:17:33.971336   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:17:33.971484   29546 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:17:33.971611   29546 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:17:34.158062   29546 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:34.158116   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443"
	I0920 21:17:56.483713   29546 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token au5bfe.w4m98std58j4wmo1 --discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-797954-m03 --control-plane --apiserver-advertise-address=192.168.39.108 --apiserver-bind-port=8443": (22.325561758s)
	I0920 21:17:56.483748   29546 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0920 21:17:57.184683   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-797954-m03 minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=ha-797954 minikube.k8s.io/primary=false
	I0920 21:17:57.333060   29546 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-797954-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0920 21:17:57.459382   29546 start.go:319] duration metric: took 23.491890367s to joinCluster
	I0920 21:17:57.459461   29546 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 21:17:57.459752   29546 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:17:57.461273   29546 out.go:177] * Verifying Kubernetes components...
	I0920 21:17:57.463650   29546 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:17:57.782773   29546 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:17:57.853744   29546 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:17:57.854084   29546 kapi.go:59] client config for ha-797954: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0920 21:17:57.854172   29546 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.240:8443
	I0920 21:17:57.854425   29546 node_ready.go:35] waiting up to 6m0s for node "ha-797954-m03" to be "Ready" ...
	I0920 21:17:57.854502   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:57.854511   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:57.854521   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:57.854526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:57.858388   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.354955   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.355019   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.355031   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.355038   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.358738   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:58.855362   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:58.855392   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:58.855404   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:58.855410   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:58.859676   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.355007   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.355039   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.355047   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.355051   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.358804   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:17:59.855573   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:17:59.855597   29546 round_trippers.go:469] Request Headers:
	I0920 21:17:59.855608   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:17:59.855615   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:17:59.859651   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:17:59.860298   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:00.355648   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.355670   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.355679   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.355683   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.359549   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:00.854939   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:00.854979   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:00.854989   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:00.854994   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:00.858910   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.354629   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.354654   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.354665   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.354671   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.358103   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:01.855123   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:01.855148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:01.855160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:01.855167   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:01.858757   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.355433   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.355460   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.355470   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.355478   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.358859   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:02.359609   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:02.854611   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:02.854637   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:02.854646   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:02.854651   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:02.858023   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.355696   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.355726   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.355738   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.355743   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.359438   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:03.854679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:03.854705   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:03.854714   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:03.854718   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:03.858836   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:04.355520   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.355553   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.355565   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.355572   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.359190   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:04.359707   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:04.855114   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:04.855137   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:04.855146   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:04.855151   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:04.859053   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.355120   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.355148   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.355160   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.355166   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.359138   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:05.855384   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:05.855416   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:05.855429   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:05.855436   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:05.860675   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:06.354757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.354801   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.354812   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.354818   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.358624   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:06.854603   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:06.854626   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:06.854636   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:06.854654   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:06.868553   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:06.869190   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:07.354649   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.354676   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.354687   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.354700   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.359137   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:07.855272   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:07.855298   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:07.855322   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:07.855332   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:07.859818   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:08.354945   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.354975   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.354983   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.354987   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.358743   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:08.854654   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:08.854694   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:08.854705   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:08.854726   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:08.858133   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.354680   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.354702   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.354708   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.354711   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.358286   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:09.358822   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:09.855309   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:09.855334   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:09.855343   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:09.855347   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:09.859448   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.355394   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.355413   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.355423   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.355429   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.360033   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:10.855581   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:10.855604   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:10.855615   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:10.855622   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:10.859498   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.355312   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.355337   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.355345   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.355349   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.358651   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.359127   29546 node_ready.go:53] node "ha-797954-m03" has status "Ready":"False"
	I0920 21:18:11.855144   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:11.855174   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.855184   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.855209   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.859059   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.859886   29546 node_ready.go:49] node "ha-797954-m03" has status "Ready":"True"
	I0920 21:18:11.859907   29546 node_ready.go:38] duration metric: took 14.005464709s for node "ha-797954-m03" to be "Ready" ...
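
	The loop above simply re-issues GET /api/v1/nodes/ha-797954-m03 roughly every 500ms until the node reports a Ready condition of True. A minimal client-go sketch of such a readiness wait (an assumed example with a placeholder kubeconfig path, not minikube's node_ready.go):

// A minimal sketch (assumed, not minikube's code) of polling a node's
// Ready condition with client-go, as the GET loop above does.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
		}
	}
}

func main() {
	// Placeholder kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitNodeReady(ctx, cs, "ha-797954-m03"))
}
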
	I0920 21:18:11.859917   29546 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:18:11.859993   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:11.860006   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.860016   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.860021   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.868162   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.876454   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.876533   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-bkcf8
	I0920 21:18:11.876538   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.876545   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.876552   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.889831   29546 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0920 21:18:11.890530   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.890547   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.890556   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.890559   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.899152   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.899662   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.899680   29546 pod_ready.go:82] duration metric: took 23.198308ms for pod "coredns-7c65d6cfc9-bkcf8" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899690   29546 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.899746   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-djhfr
	I0920 21:18:11.899753   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.899760   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.899765   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.905423   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.906584   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.906599   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.906607   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.906612   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.909991   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:11.910558   29546 pod_ready.go:93] pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.910576   29546 pod_ready.go:82] duration metric: took 10.880105ms for pod "coredns-7c65d6cfc9-djhfr" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910585   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.910644   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954
	I0920 21:18:11.910653   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.910660   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.910665   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.917040   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:11.917709   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:11.917723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.917731   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.917734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.926063   29546 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0920 21:18:11.926748   29546 pod_ready.go:93] pod "etcd-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.926766   29546 pod_ready.go:82] duration metric: took 16.17481ms for pod "etcd-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926778   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.926852   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m02
	I0920 21:18:11.926862   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.926872   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.926879   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.932632   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:11.933455   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:11.933475   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:11.933486   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:11.933492   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:11.936415   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:11.936956   29546 pod_ready.go:93] pod "etcd-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:11.936977   29546 pod_ready.go:82] duration metric: took 10.190753ms for pod "etcd-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:11.936989   29546 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.055265   29546 request.go:632] Waited for 118.210581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055358   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/etcd-ha-797954-m03
	I0920 21:18:12.055367   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.055378   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.055387   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.060304   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.255635   29546 request.go:632] Waited for 194.221335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255716   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:12.255723   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.255734   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.255739   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.259841   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:12.260392   29546 pod_ready.go:93] pod "etcd-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.260414   29546 pod_ready.go:82] duration metric: took 323.41646ms for pod "etcd-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
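
	The "Waited for ... due to client-side throttling" lines come from client-go's default client-side rate limiter: with QPS and Burst left at 0 in the rest.Config dumped at kapi.go:59 above, client-go falls back to roughly 5 requests/s with a burst of 10, so these back-to-back GETs get briefly delayed. A small, assumed sketch of raising those limits on a rest.Config (placeholder kubeconfig path):

// A small sketch (assumed example) of raising client-go's client-side
// rate limits; leaving QPS/Burst at 0 keeps the defaults that produce
// the throttling messages seen above.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow more sustained requests per second
	cfg.Burst = 100 // and a larger burst before throttling kicks in
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
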
	I0920 21:18:12.260439   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.455886   29546 request.go:632] Waited for 195.375581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455950   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954
	I0920 21:18:12.455955   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.455963   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.455971   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.459141   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.656082   29546 request.go:632] Waited for 196.149744ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656147   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:12.656164   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.656172   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.656176   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.659742   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:12.660232   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:12.660250   29546 pod_ready.go:82] duration metric: took 399.803876ms for pod "kube-apiserver-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.660259   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:12.855251   29546 request.go:632] Waited for 194.932878ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855338   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m02
	I0920 21:18:12.855348   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:12.855357   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:12.855369   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:12.861682   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:13.055384   29546 request.go:632] Waited for 192.289197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055437   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:13.055442   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.055449   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.055454   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.058687   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.059244   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.059263   29546 pod_ready.go:82] duration metric: took 398.997761ms for pod "kube-apiserver-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.059272   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.256048   29546 request.go:632] Waited for 196.689932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256118   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-797954-m03
	I0920 21:18:13.256126   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.256135   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.256141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.259220   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.455932   29546 request.go:632] Waited for 195.902068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456005   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:13.456028   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.456039   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.456043   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.459696   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.460211   29546 pod_ready.go:93] pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.460227   29546 pod_ready.go:82] duration metric: took 400.948779ms for pod "kube-apiserver-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.460237   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.655197   29546 request.go:632] Waited for 194.89988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655267   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954
	I0920 21:18:13.655282   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.655309   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.655321   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.658570   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.855939   29546 request.go:632] Waited for 196.380436ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855992   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:13.855997   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:13.856004   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:13.856006   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:13.859600   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:13.860576   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:13.860591   29546 pod_ready.go:82] duration metric: took 400.349149ms for pod "kube-controller-manager-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:13.860600   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.055168   29546 request.go:632] Waited for 194.505202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055234   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m02
	I0920 21:18:14.055241   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.055251   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.055260   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.058275   29546 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0920 21:18:14.255766   29546 request.go:632] Waited for 196.386628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255858   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:14.255870   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.255879   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.255887   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.259522   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.260240   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.260258   29546 pod_ready.go:82] duration metric: took 399.650728ms for pod "kube-controller-manager-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.260270   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.455432   29546 request.go:632] Waited for 195.087635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455501   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-797954-m03
	I0920 21:18:14.455506   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.455516   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.455526   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.458903   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.655253   29546 request.go:632] Waited for 195.299076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655305   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:14.655310   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.655317   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.655324   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.658799   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:14.659327   29546 pod_ready.go:93] pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:14.659347   29546 pod_ready.go:82] duration metric: took 399.070103ms for pod "kube-controller-manager-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.659357   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:14.855420   29546 request.go:632] Waited for 195.985286ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855474   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2pjxq
	I0920 21:18:14.855480   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:14.855491   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:14.855500   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:14.859401   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.055482   29546 request.go:632] Waited for 195.388569ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055554   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:15.055561   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.055571   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.055577   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.059246   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.059980   29546 pod_ready.go:93] pod "kube-proxy-2pjxq" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.060007   29546 pod_ready.go:82] duration metric: took 400.64405ms for pod "kube-proxy-2pjxq" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.060015   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.256131   29546 request.go:632] Waited for 196.057373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256206   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jskx5
	I0920 21:18:15.256211   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.256217   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.256220   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.260016   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.456032   29546 request.go:632] Waited for 195.376218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456111   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:15.456119   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.456127   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.456134   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.459253   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.459960   29546 pod_ready.go:93] pod "kube-proxy-jskx5" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.459978   29546 pod_ready.go:82] duration metric: took 399.957197ms for pod "kube-proxy-jskx5" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.459988   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.656143   29546 request.go:632] Waited for 196.090186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656203   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qx9ph
	I0920 21:18:15.656215   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.656234   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.656239   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.660293   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:15.855333   29546 request.go:632] Waited for 194.281869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855386   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:15.855391   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:15.855398   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:15.855404   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:15.859011   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:15.859698   29546 pod_ready.go:93] pod "kube-proxy-qx9ph" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:15.859721   29546 pod_ready.go:82] duration metric: took 399.724688ms for pod "kube-proxy-qx9ph" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:15.859733   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.055835   29546 request.go:632] Waited for 196.022812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055896   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954
	I0920 21:18:16.055901   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.055909   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.055916   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.059373   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.256147   29546 request.go:632] Waited for 196.031182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256218   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954
	I0920 21:18:16.256225   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.256233   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.256236   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.259485   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.260205   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.260224   29546 pod_ready.go:82] duration metric: took 400.483331ms for pod "kube-scheduler-ha-797954" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.260233   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.456145   29546 request.go:632] Waited for 195.844042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456197   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m02
	I0920 21:18:16.456203   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.456211   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.456216   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.459396   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:16.655597   29546 request.go:632] Waited for 195.489863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655679   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m02
	I0920 21:18:16.655686   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.655697   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.655702   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.659872   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:16.660490   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:16.660506   29546 pod_ready.go:82] duration metric: took 400.26753ms for pod "kube-scheduler-ha-797954-m02" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.660515   29546 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:16.856056   29546 request.go:632] Waited for 195.455765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856110   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-797954-m03
	I0920 21:18:16.856115   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:16.856123   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:16.856132   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:16.860019   29546 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0920 21:18:17.056049   29546 request.go:632] Waited for 195.339366ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056101   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes/ha-797954-m03
	I0920 21:18:17.056106   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.056113   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.056117   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.060195   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.060851   29546 pod_ready.go:93] pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace has status "Ready":"True"
	I0920 21:18:17.060872   29546 pod_ready.go:82] duration metric: took 400.349558ms for pod "kube-scheduler-ha-797954-m03" in "kube-system" namespace to be "Ready" ...
	I0920 21:18:17.060887   29546 pod_ready.go:39] duration metric: took 5.200957368s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 21:18:17.060904   29546 api_server.go:52] waiting for apiserver process to appear ...
	I0920 21:18:17.060962   29546 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:18:17.079611   29546 api_server.go:72] duration metric: took 19.620113639s to wait for apiserver process to appear ...
	I0920 21:18:17.079636   29546 api_server.go:88] waiting for apiserver healthz status ...
	I0920 21:18:17.079653   29546 api_server.go:253] Checking apiserver healthz at https://192.168.39.240:8443/healthz ...
	I0920 21:18:17.084062   29546 api_server.go:279] https://192.168.39.240:8443/healthz returned 200:
	ok
	I0920 21:18:17.084125   29546 round_trippers.go:463] GET https://192.168.39.240:8443/version
	I0920 21:18:17.084133   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.084141   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.084149   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.085207   29546 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0920 21:18:17.085276   29546 api_server.go:141] control plane version: v1.31.1
	I0920 21:18:17.085291   29546 api_server.go:131] duration metric: took 5.649897ms to wait for apiserver health ...
	I0920 21:18:17.085298   29546 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 21:18:17.256039   29546 request.go:632] Waited for 170.668533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256105   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.256110   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.256118   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.256123   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.262225   29546 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0920 21:18:17.269500   29546 system_pods.go:59] 24 kube-system pods found
	I0920 21:18:17.269529   29546 system_pods.go:61] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.269535   29546 system_pods.go:61] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.269539   29546 system_pods.go:61] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.269544   29546 system_pods.go:61] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.269549   29546 system_pods.go:61] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.269554   29546 system_pods.go:61] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.269561   29546 system_pods.go:61] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.269569   29546 system_pods.go:61] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.269577   29546 system_pods.go:61] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.269584   29546 system_pods.go:61] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.269592   29546 system_pods.go:61] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.269597   29546 system_pods.go:61] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.269606   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.269609   29546 system_pods.go:61] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.269615   29546 system_pods.go:61] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.269618   29546 system_pods.go:61] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.269624   29546 system_pods.go:61] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.269628   29546 system_pods.go:61] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.269634   29546 system_pods.go:61] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.269640   29546 system_pods.go:61] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.269643   29546 system_pods.go:61] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.269650   29546 system_pods.go:61] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.269655   29546 system_pods.go:61] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.269663   29546 system_pods.go:61] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.269674   29546 system_pods.go:74] duration metric: took 184.367541ms to wait for pod list to return data ...
	I0920 21:18:17.269686   29546 default_sa.go:34] waiting for default service account to be created ...
	I0920 21:18:17.455854   29546 request.go:632] Waited for 186.091197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455909   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/default/serviceaccounts
	I0920 21:18:17.455915   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.455925   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.455937   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.460043   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.460164   29546 default_sa.go:45] found service account: "default"
	I0920 21:18:17.460183   29546 default_sa.go:55] duration metric: took 190.487846ms for default service account to be created ...
	I0920 21:18:17.460191   29546 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 21:18:17.655637   29546 request.go:632] Waited for 195.376962ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655710   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/namespaces/kube-system/pods
	I0920 21:18:17.655718   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.655729   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.655734   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.661370   29546 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0920 21:18:17.667683   29546 system_pods.go:86] 24 kube-system pods found
	I0920 21:18:17.667707   29546 system_pods.go:89] "coredns-7c65d6cfc9-bkcf8" [5d5aca96-de3a-4154-b195-b69aa45e2cfc] Running
	I0920 21:18:17.667713   29546 system_pods.go:89] "coredns-7c65d6cfc9-djhfr" [a8ca35db-3f4c-4e4c-a9e4-e12abecff539] Running
	I0920 21:18:17.667717   29546 system_pods.go:89] "etcd-ha-797954" [ce647dfd-9ff8-483e-ac64-e6cbed777dfa] Running
	I0920 21:18:17.667721   29546 system_pods.go:89] "etcd-ha-797954-m02" [d8c6a164-a169-482f-a05f-0e723b947424] Running
	I0920 21:18:17.667724   29546 system_pods.go:89] "etcd-ha-797954-m03" [8cc13a8d-b936-4a85-95a8-7cc0d0b63df0] Running
	I0920 21:18:17.667728   29546 system_pods.go:89] "kindnet-6lk2f" [ddd6c393-68ce-4fff-9f0d-6e0147e03926] Running
	I0920 21:18:17.667733   29546 system_pods.go:89] "kindnet-qh5rt" [2036234c-15c7-43da-ac52-cdac77f1a721] Running
	I0920 21:18:17.667738   29546 system_pods.go:89] "kindnet-tm8ml" [015073b8-e223-49d7-80cb-03d0ced5e0b5] Running
	I0920 21:18:17.667744   29546 system_pods.go:89] "kube-apiserver-ha-797954" [91b4d438-70c1-4335-a0eb-9d48bd470b37] Running
	I0920 21:18:17.667753   29546 system_pods.go:89] "kube-apiserver-ha-797954-m02" [d0634b88-e668-4be0-b175-543f5f496efa] Running
	I0920 21:18:17.667762   29546 system_pods.go:89] "kube-apiserver-ha-797954-m03" [567aeb50-886c-47d8-b285-d93f6ea8947f] Running
	I0920 21:18:17.667770   29546 system_pods.go:89] "kube-controller-manager-ha-797954" [7bcf4459-7067-4f10-a79d-a4cd0017fffb] Running
	I0920 21:18:17.667776   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m02" [4366a95b-15ec-42c7-916d-14eb0dba314d] Running
	I0920 21:18:17.667784   29546 system_pods.go:89] "kube-controller-manager-ha-797954-m03" [b1366ddb-c6da-4dec-9da7-5544133e7d48] Running
	I0920 21:18:17.667788   29546 system_pods.go:89] "kube-proxy-2pjxq" [bbd40eaa-4d03-491a-96c6-ef4cedc6401a] Running
	I0920 21:18:17.667792   29546 system_pods.go:89] "kube-proxy-jskx5" [438fa076-db74-4876-b355-a143535b97fd] Running
	I0920 21:18:17.667796   29546 system_pods.go:89] "kube-proxy-qx9ph" [b3d20fb2-8eff-4921-add4-ee36b8fba334] Running
	I0920 21:18:17.667802   29546 system_pods.go:89] "kube-scheduler-ha-797954" [a55760dc-c4e4-4cec-b8d1-04512bd9ad33] Running
	I0920 21:18:17.667806   29546 system_pods.go:89] "kube-scheduler-ha-797954-m02" [4a53607e-2d10-4fcb-8697-bf042189871d] Running
	I0920 21:18:17.667812   29546 system_pods.go:89] "kube-scheduler-ha-797954-m03" [5586a8f2-f55f-4d05-a936-3f2a5756be15] Running
	I0920 21:18:17.667816   29546 system_pods.go:89] "kube-vip-ha-797954" [ad002c23-a891-4bf6-bbb7-bba95f43b1d0] Running
	I0920 21:18:17.667820   29546 system_pods.go:89] "kube-vip-ha-797954-m02" [09b5b6c2-96e7-43f1-a53e-21513fc048ed] Running
	I0920 21:18:17.667823   29546 system_pods.go:89] "kube-vip-ha-797954-m03" [b2310f23-42cf-4e3d-b6d3-e8b27a009742] Running
	I0920 21:18:17.667829   29546 system_pods.go:89] "storage-provisioner" [05385037-e9ea-4afb-a256-3bfae42e00f5] Running
	I0920 21:18:17.667835   29546 system_pods.go:126] duration metric: took 207.638426ms to wait for k8s-apps to be running ...
	I0920 21:18:17.667846   29546 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 21:18:17.667895   29546 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:18:17.684265   29546 system_svc.go:56] duration metric: took 16.408361ms WaitForService to wait for kubelet
	I0920 21:18:17.684294   29546 kubeadm.go:582] duration metric: took 20.224798232s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:18:17.684316   29546 node_conditions.go:102] verifying NodePressure condition ...
	I0920 21:18:17.855695   29546 request.go:632] Waited for 171.306025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855757   29546 round_trippers.go:463] GET https://192.168.39.240:8443/api/v1/nodes
	I0920 21:18:17.855762   29546 round_trippers.go:469] Request Headers:
	I0920 21:18:17.855769   29546 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0920 21:18:17.855773   29546 round_trippers.go:473]     Accept: application/json, */*
	I0920 21:18:17.860127   29546 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0920 21:18:17.861061   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861078   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861097   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861107   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861116   29546 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 21:18:17.861121   29546 node_conditions.go:123] node cpu capacity is 2
	I0920 21:18:17.861127   29546 node_conditions.go:105] duration metric: took 176.804918ms to run NodePressure ...
	I0920 21:18:17.861140   29546 start.go:241] waiting for startup goroutines ...
	I0920 21:18:17.861159   29546 start.go:255] writing updated cluster config ...
	I0920 21:18:17.861474   29546 ssh_runner.go:195] Run: rm -f paused
	I0920 21:18:17.914152   29546 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 21:18:17.916981   29546 out.go:177] * Done! kubectl is now configured to use "ha-797954" cluster and "default" namespace by default
	
	
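	For context, the "extra waiting", pod_ready, and /healthz lines above come from minikube's readiness wait loops (pod_ready.go, api_server.go). The sketch below is illustrative only, not minikube's actual code: a minimal client-go program that performs the same kind of pod-readiness poll and apiserver health probe that the log records. The kubeconfig path, timeout, and pod name are assumed placeholders, not values taken from this run.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig location; adjust for your environment/profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll a kube-system pod until it reports Ready, similar to the
		// "waiting up to 6m0s for pod ... to be Ready" steps in the log.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-qx9ph", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				break
			}
			time.Sleep(2 * time.Second)
		}

		// The log then probes the apiserver's /healthz endpoint; the same
		// REST client can issue that request directly.
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		fmt.Printf("/healthz returned: %s\n", body)
	}

	Against the cluster in this run, the health probe would be expected to print "ok", matching the 200 response logged at 21:18:17 above.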
	==> CRI-O <==
	Sep 20 21:22:07 ha-797954 crio[666]: time="2024-09-20 21:22:07.973181790Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867327973155458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c57ce843-3b56-476b-87d4-d1a642e40a82 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:07 ha-797954 crio[666]: time="2024-09-20 21:22:07.973782758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1102386d-bea4-44c6-a8cd-97546b8d9539 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:07 ha-797954 crio[666]: time="2024-09-20 21:22:07.973864070Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1102386d-bea4-44c6-a8cd-97546b8d9539 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:07 ha-797954 crio[666]: time="2024-09-20 21:22:07.974121209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1102386d-bea4-44c6-a8cd-97546b8d9539 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.014694346Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=18a2b86a-4cd8-4538-84ae-a206f222e80b name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.014786367Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=18a2b86a-4cd8-4538-84ae-a206f222e80b name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.015754532Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e66dcdc3-5064-4736-8dd7-5329034caee3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.016215515Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867328016192802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e66dcdc3-5064-4736-8dd7-5329034caee3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.016689922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1460c2ae-ce73-449a-a494-740e5ca44bb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.016776212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1460c2ae-ce73-449a-a494-740e5ca44bb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.017905838Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1460c2ae-ce73-449a-a494-740e5ca44bb4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.071136735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cf6cefef-33a0-448c-b74e-a5367db38712 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.071274321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cf6cefef-33a0-448c-b74e-a5367db38712 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.073081198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e45e2bcc-c5b7-4ea9-b058-1c0e1d1e4288 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.073560775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867328073536730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e45e2bcc-c5b7-4ea9-b058-1c0e1d1e4288 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.073995254Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1381b8cb-6289-40cd-a977-275b6646a6c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.074075642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1381b8cb-6289-40cd-a977-275b6646a6c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.074311200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1381b8cb-6289-40cd-a977-275b6646a6c1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.113346557Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=792e97ef-a8b4-497a-8bb4-cf958d4295c9 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.113523115Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=792e97ef-a8b4-497a-8bb4-cf958d4295c9 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.114770461Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b731169-a5bc-44d9-a95f-9c232d749ea7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.115232544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867328115207775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b731169-a5bc-44d9-a95f-9c232d749ea7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.115885413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54ef07dc-d619-4cd8-aadf-19072b60e9f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.115962515Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54ef07dc-d619-4cd8-aadf-19072b60e9f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:22:08 ha-797954 crio[666]: time="2024-09-20 21:22:08.116210185Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867101601325723,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958883518913,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726866958850406855,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79,PodSandboxId:0ac27cef2729c0df7d987404a63f883a30be782372032aa31fcbe8a3dfa300a8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNIN
G,CreatedAt:1726866958825667527,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:17268669
46745728301,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726866946560910908,Labels:map[string]strin
g{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb,PodSandboxId:2ced8553e30128303b22fe9ff0bf8119fefc9729ff510070c138297b2edfcbb3,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726866936873742655,Labels:map[string]string{
io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a54da6e69705211cd5f4d070f63bbb2,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726866934980260113,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kuber
netes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360,PodSandboxId:2c27a0b835060dbb1899a4af4c05a4fd816335f9b3f9fac0a5d7248e6ba3cbab,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726866934952064340,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.nam
e: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726866934988370531,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-sche
duler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674,PodSandboxId:db69670c3a315720cd48ed17cf71e606baf7c5d3d62a1ff75d4dd1559ab9da38,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726866934917412851,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.ku
bernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54ef07dc-d619-4cd8-aadf-19072b60e9f5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	468d1e1f7ac8a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   d6db457e8bbe8       busybox-7dff88458-44q29
	e3c9e6da67286       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   48e2a20451c46       coredns-7c65d6cfc9-djhfr
	0e272e92406da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   891e1332d4c2d       coredns-7c65d6cfc9-bkcf8
	0fdd2277dc992       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   0ac27cef2729c       storage-provisioner
	d34d703bb77f7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      6 minutes ago       Running             kindnet-cni               0                   794a246d61681       kindnet-6lk2f
	6367518948d4f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   457a98ab41d5f       kube-proxy-qx9ph
	371dadd621425       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   2ced8553e3012       kube-vip-ha-797954
	34e481bafb0d8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   0e7df03407c5d       kube-scheduler-ha-797954
	613c4c00096cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   388cd6346f6a7       etcd-ha-797954
	3ec2d3bd57d8b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   2c27a0b835060       kube-controller-manager-ha-797954
	0d975d52ea292       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   db69670c3a315       kube-apiserver-ha-797954
	
	
	==> coredns [0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab] <==
	[INFO] 10.244.0.4:57168 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.029279393s
	[INFO] 10.244.1.2:56742 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000226618s
	[INFO] 10.244.1.2:46151 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000739523s
	[INFO] 10.244.2.2:36832 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000342649s
	[INFO] 10.244.0.4:48215 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142911s
	[INFO] 10.244.0.4:37051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021353644s
	[INFO] 10.244.0.4:56814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025486s
	[INFO] 10.244.0.4:60619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004011357s
	[INFO] 10.244.0.4:45781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219299s
	[INFO] 10.244.1.2:58895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003827s
	[INFO] 10.244.1.2:52799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348157s
	[INFO] 10.244.1.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146401s
	[INFO] 10.244.1.2:58888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000279379s
	[INFO] 10.244.2.2:45400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149773s
	[INFO] 10.244.2.2:42656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000284841s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168662s
	[INFO] 10.244.1.2:33336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145322s
	[INFO] 10.244.1.2:44953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000245865s
	[INFO] 10.244.2.2:58673 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183778s
	[INFO] 10.244.2.2:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198561s
	[INFO] 10.244.1.2:49222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209152s
	[INFO] 10.244.1.2:57989 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183561s
	[INFO] 10.244.1.2:57201 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017075s
	[INFO] 10.244.2.2:46144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184056s
	[INFO] 10.244.2.2:53312 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152142s
	
	
	==> coredns [e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50] <==
	[INFO] 10.244.0.4:37614 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00011425s
	[INFO] 10.244.1.2:60075 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002047091s
	[INFO] 10.244.1.2:53774 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000137405s
	[INFO] 10.244.1.2:43436 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001572851s
	[INFO] 10.244.1.2:50084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173888s
	[INFO] 10.244.2.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281143s
	[INFO] 10.244.2.2:49861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002195678s
	[INFO] 10.244.2.2:43162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194152s
	[INFO] 10.244.2.2:38062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149126s
	[INFO] 10.244.2.2:46161 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726482s
	[INFO] 10.244.2.2:38358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105606s
	[INFO] 10.244.0.4:34794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107864s
	[INFO] 10.244.0.4:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073572s
	[INFO] 10.244.0.4:59058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059888s
	[INFO] 10.244.1.2:33254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111604s
	[INFO] 10.244.1.2:55753 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094078s
	[INFO] 10.244.2.2:46481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128659s
	[INFO] 10.244.2.2:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192291s
	[INFO] 10.244.0.4:45612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142384s
	[INFO] 10.244.0.4:32977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240882s
	[INFO] 10.244.0.4:36320 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157906s
	[INFO] 10.244.0.4:57209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121261s
	[INFO] 10.244.1.2:52740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116141s
	[INFO] 10.244.2.2:45858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145799s
	[INFO] 10.244.2.2:32912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224434s
	
	
	==> describe nodes <==
	Name:               ha-797954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:45 +0000   Fri, 20 Sep 2024 21:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-797954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ee8e79db7754067acc26d8f79541cf4
	  System UUID:                1ee8e79d-b775-4067-acc2-6d8f79541cf4
	  Boot ID:                    2a01d700-0c79-4cac-934b-ff091b3f743b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-44q29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-bkcf8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7c65d6cfc9-djhfr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-797954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-6lk2f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m23s
	  kube-system                 kube-apiserver-ha-797954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-797954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m29s
	  kube-system                 kube-proxy-qx9ph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-scheduler-ha-797954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-797954                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m21s  kube-proxy       
	  Normal  Starting                 6m27s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s  kubelet          Node ha-797954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s  kubelet          Node ha-797954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s  kubelet          Node ha-797954 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  NodeReady                6m10s  kubelet          Node ha-797954 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal  RegisteredNode           4m5s   node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	
	
	Name:               ha-797954-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:16:34 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:19:27 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 21:18:37 +0000   Fri, 20 Sep 2024 21:20:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-797954-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b99cf888073c415c8a60d9eed0a33a77
	  System UUID:                b99cf888-073c-415c-8a60-d9eed0a33a77
	  Boot ID:                    9ba91b19-4463-4b79-aaf1-951e8cebdd88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5m4df                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-797954-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m31s
	  kube-system                 kindnet-qh5rt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-797954-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-controller-manager-ha-797954-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-2pjxq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-797954-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m32s
	  kube-system                 kube-vip-ha-797954-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m34s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m34s)  kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m34s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m33s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  NodeNotReady             2m                     node-controller  Node ha-797954-m02 status is now: NodeNotReady
	
	
	Name:               ha-797954-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:21:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:17:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:18:23 +0000   Fri, 20 Sep 2024 21:18:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-797954-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17e0c265db3c48ee8ba8b007e24689e7
	  System UUID:                17e0c265-db3c-48ee-8ba8-b007e24689e7
	  Boot ID:                    0d991bd8-dcf2-4a85-9be2-891290e9f74b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s4w7f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-797954-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m13s
	  kube-system                 kindnet-tm8ml                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m15s
	  kube-system                 kube-apiserver-ha-797954-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-ha-797954-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-proxy-jskx5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-scheduler-ha-797954-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-vip-ha-797954-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m15s (x8 over 4m16s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m15s (x8 over 4m16s)  kubelet          Node ha-797954-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m15s (x7 over 4m16s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal  RegisteredNode           4m5s                   node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	
	
	Name:               ha-797954-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_18_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:22:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:18:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:19:26 +0000   Fri, 20 Sep 2024 21:19:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-797954-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 301819962cda48ca9fc543396f387646
	  System UUID:                30181996-2cda-48ca-9fc5-43396f387646
	  Boot ID:                    eb990af5-0bb6-4a23-a507-cf3ea2019e05
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5zw6l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-ghvv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m6s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m13s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m13s)  kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m13s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal  NodeReady                2m53s                  kubelet          Node ha-797954-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Sep20 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051526] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039875] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.837230] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.569523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.588596] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.856812] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062395] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.182824] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.137335] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.286470] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.136041] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.054578] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.065217] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.071912] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.088568] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.488875] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.290079] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:16] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e] <==
	{"level":"warn","ts":"2024-09-20T21:22:08.382351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.384626Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.391309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.391514Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.396089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.408252Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.414824Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.423806Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.427530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.431170Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.438298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.444449Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.449801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.453443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.457334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.462590Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.468092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.474168Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.478641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.482173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.486366Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.491572Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.491676Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.500244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:22:08.551296Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 21:22:08 up 7 min,  0 users,  load average: 0.24, 0.31, 0.17
	Linux ha-797954 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2] <==
	I0920 21:21:37.972737       1 main.go:299] handling current node
	I0920 21:21:47.971538       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:47.971596       1 main.go:299] handling current node
	I0920 21:21:47.971629       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:47.971638       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:47.971894       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:47.971922       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:47.971991       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:47.972013       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:57.976572       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:21:57.976762       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:21:57.976928       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:21:57.976953       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:21:57.977027       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:21:57.977047       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:21:57.977104       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:21:57.977123       1 main.go:299] handling current node
	I0920 21:22:07.980638       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:22:07.980697       1 main.go:299] handling current node
	I0920 21:22:07.980717       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:22:07.980723       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:22:07.980846       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:22:07.980868       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:22:07.980917       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:22:07.981000       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674] <==
	W0920 21:15:39.995416       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240]
	I0920 21:15:39.997201       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 21:15:40.003324       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 21:15:40.250083       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 21:15:41.192926       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 21:15:41.217232       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0920 21:15:41.227871       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 21:15:45.151589       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0920 21:15:45.903425       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0920 21:18:22.997555       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52530: use of closed network connection
	E0920 21:18:23.183834       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52560: use of closed network connection
	E0920 21:18:23.372686       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52582: use of closed network connection
	E0920 21:18:23.592050       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:52590: use of closed network connection
	E0920 21:18:23.773849       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55264: use of closed network connection
	E0920 21:18:23.984978       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55286: use of closed network connection
	E0920 21:18:24.179836       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55308: use of closed network connection
	E0920 21:18:24.361809       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55330: use of closed network connection
	E0920 21:18:24.551154       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55360: use of closed network connection
	E0920 21:18:24.848743       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55398: use of closed network connection
	E0920 21:18:25.024085       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55414: use of closed network connection
	E0920 21:18:25.219294       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55430: use of closed network connection
	E0920 21:18:25.388448       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55444: use of closed network connection
	E0920 21:18:25.573028       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55472: use of closed network connection
	E0920 21:18:25.768039       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:55484: use of closed network connection
	W0920 21:19:49.996169       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.240]
	
	
	==> kube-controller-manager [3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360] <==
	I0920 21:18:56.149866       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-797954-m04\" does not exist"
	I0920 21:18:56.187860       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-797954-m04" podCIDRs=["10.244.3.0/24"]
	I0920 21:18:56.187915       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.187977       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.481322       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:56.872458       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:57.921726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.169798       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:18:58.196622       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.119543       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:00.120039       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-797954-m04"
	I0920 21:19:00.151133       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:06.208726       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212653       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:15.212832       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:19:15.244613       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:17.869384       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:19:26.358050       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:20:08.200630       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.200711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:20:08.233657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:08.344363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="21.289751ms"
	I0920 21:20:08.345202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="127.016µs"
	I0920 21:20:10.222320       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:20:13.505001       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	
	
	==> kube-proxy [6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:15:47.138996       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:15:47.173137       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.240"]
	E0920 21:15:47.173284       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:15:47.239418       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:15:47.244602       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:15:47.244668       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:15:47.323829       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:15:47.324671       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:15:47.324724       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:15:47.327319       1 config.go:199] "Starting service config controller"
	I0920 21:15:47.329757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:15:47.330178       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:15:47.330205       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:15:47.331254       1 config.go:328] "Starting node config controller"
	I0920 21:15:47.331284       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:15:47.430176       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:15:47.430270       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:15:47.431643       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463] <==
	I0920 21:18:18.810601       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-5m4df" node="ha-797954-m02"
	E0920 21:18:18.861661       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.861738       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 5110f648-42f3-47e2-bc5a-4b33d1468080(default/busybox-7dff88458-s4w7f) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-s4w7f"
	E0920 21:18:18.861759       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-s4w7f\": pod busybox-7dff88458-s4w7f is already assigned to node \"ha-797954-m03\"" pod="default/busybox-7dff88458-s4w7f"
	I0920 21:18:18.861781       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-s4w7f" node="ha-797954-m03"
	E0920 21:18:18.864713       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:18.864788       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 443788ae-8329-43bd-819c-a80e37254cf3(default/busybox-7dff88458-44q29) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-44q29"
	E0920 21:18:18.864810       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-44q29\": pod busybox-7dff88458-44q29 is already assigned to node \"ha-797954\"" pod="default/busybox-7dff88458-44q29"
	I0920 21:18:18.864831       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-44q29" node="ha-797954"
	E0920 21:18:56.226900       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.227110       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod a8b2d657-43d8-4e0d-8200-e11fb64ed35c(kube-system/kindnet-5zw6l) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5zw6l"
	E0920 21:18:56.227211       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5zw6l\": pod kindnet-5zw6l is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-5zw6l"
	I0920 21:18:56.227382       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5zw6l" node="ha-797954-m04"
	E0920 21:18:56.228063       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.234373       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 586765d9-0991-47c1-aa4f-7c82cb8a2e76(kube-system/kube-proxy-ghvv9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvv9"
	E0920 21:18:56.234534       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-ghvv9"
	I0920 21:18:56.234608       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.279871       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kct8" node="ha-797954-m04"
	E0920 21:18:56.280008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-4kct8"
	E0920 21:18:56.280247       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dpjmv" node="ha-797954-m04"
	E0920 21:18:56.280445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-dpjmv"
	E0920 21:18:56.401044       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:18:56.401344       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e53077ca-c9f3-433c-8f93-766336e4a9ab(kube-system/kindnet-r4kzr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r4kzr"
	E0920 21:18:56.401460       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-r4kzr"
	I0920 21:18:56.401669       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	
	
	==> kubelet <==
	Sep 20 21:20:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:20:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280669    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:41 ha-797954 kubelet[1315]: E0920 21:20:41.280717    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867241280234372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282414    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:20:51 ha-797954 kubelet[1315]: E0920 21:20:51.282759    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867251282014880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284298    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:01 ha-797954 kubelet[1315]: E0920 21:21:01.284350    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867261283949905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286306    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:11 ha-797954 kubelet[1315]: E0920 21:21:11.286783    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867271285944996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289176    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:21 ha-797954 kubelet[1315]: E0920 21:21:21.289509    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867281288672833,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293560    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:31 ha-797954 kubelet[1315]: E0920 21:21:31.293612    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867291292385143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.149114    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:21:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:21:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295407    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:41 ha-797954 kubelet[1315]: E0920 21:21:41.295545    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867301295116540,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297121    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:21:51 ha-797954 kubelet[1315]: E0920 21:21:51.297162    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867311296832439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:22:01 ha-797954 kubelet[1315]: E0920 21:22:01.299368    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321299048597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:22:01 ha-797954 kubelet[1315]: E0920 21:22:01.299417    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867321299048597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-797954 -n ha-797954
helpers_test.go:261: (dbg) Run:  kubectl --context ha-797954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.43s)

x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.45s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-797954 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-797954 -v=7 --alsologtostderr
E0920 21:24:11.809447   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-797954 -v=7 --alsologtostderr: exit status 82 (2m1.89026487s)

-- stdout --
	* Stopping node "ha-797954-m04"  ...
	* Stopping node "ha-797954-m03"  ...
	
	

-- /stdout --
** stderr ** 
	I0920 21:22:13.760557   35144 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:22:13.760682   35144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:22:13.760693   35144 out.go:358] Setting ErrFile to fd 2...
	I0920 21:22:13.760700   35144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:22:13.760880   35144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:22:13.761141   35144 out.go:352] Setting JSON to false
	I0920 21:22:13.761276   35144 mustload.go:65] Loading cluster: ha-797954
	I0920 21:22:13.761740   35144 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:22:13.761836   35144 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:22:13.762115   35144 mustload.go:65] Loading cluster: ha-797954
	I0920 21:22:13.762261   35144 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:22:13.762295   35144 stop.go:39] StopHost: ha-797954-m04
	I0920 21:22:13.762752   35144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:22:13.762801   35144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:22:13.778527   35144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38727
	I0920 21:22:13.779166   35144 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:22:13.779835   35144 main.go:141] libmachine: Using API Version  1
	I0920 21:22:13.779853   35144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:22:13.780248   35144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:22:13.783074   35144 out.go:177] * Stopping node "ha-797954-m04"  ...
	I0920 21:22:13.784304   35144 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 21:22:13.784328   35144 main.go:141] libmachine: (ha-797954-m04) Calling .DriverName
	I0920 21:22:13.784587   35144 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 21:22:13.784631   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHHostname
	I0920 21:22:13.787731   35144 main.go:141] libmachine: (ha-797954-m04) DBG | domain ha-797954-m04 has defined MAC address 52:54:00:c8:ec:4e in network mk-ha-797954
	I0920 21:22:13.788109   35144 main.go:141] libmachine: (ha-797954-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:ec:4e", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:18:41 +0000 UTC Type:0 Mac:52:54:00:c8:ec:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-797954-m04 Clientid:01:52:54:00:c8:ec:4e}
	I0920 21:22:13.788142   35144 main.go:141] libmachine: (ha-797954-m04) DBG | domain ha-797954-m04 has defined IP address 192.168.39.210 and MAC address 52:54:00:c8:ec:4e in network mk-ha-797954
	I0920 21:22:13.788321   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHPort
	I0920 21:22:13.788491   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHKeyPath
	I0920 21:22:13.788653   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHUsername
	I0920 21:22:13.788817   35144 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m04/id_rsa Username:docker}
	I0920 21:22:13.880839   35144 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 21:22:13.936105   35144 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 21:22:13.991855   35144 main.go:141] libmachine: Stopping "ha-797954-m04"...
	I0920 21:22:13.991894   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetState
	I0920 21:22:13.993610   35144 main.go:141] libmachine: (ha-797954-m04) Calling .Stop
	I0920 21:22:13.997307   35144 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 0/120
	I0920 21:22:15.167269   35144 main.go:141] libmachine: (ha-797954-m04) Calling .GetState
	I0920 21:22:15.168535   35144 main.go:141] libmachine: Machine "ha-797954-m04" was stopped.
	I0920 21:22:15.168598   35144 stop.go:75] duration metric: took 1.384263652s to stop
	I0920 21:22:15.168636   35144 stop.go:39] StopHost: ha-797954-m03
	I0920 21:22:15.169075   35144 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:22:15.169126   35144 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:22:15.184657   35144 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0920 21:22:15.185149   35144 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:22:15.185618   35144 main.go:141] libmachine: Using API Version  1
	I0920 21:22:15.185637   35144 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:22:15.185925   35144 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:22:15.188232   35144 out.go:177] * Stopping node "ha-797954-m03"  ...
	I0920 21:22:15.189376   35144 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 21:22:15.189401   35144 main.go:141] libmachine: (ha-797954-m03) Calling .DriverName
	I0920 21:22:15.189615   35144 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 21:22:15.189634   35144 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHHostname
	I0920 21:22:15.192393   35144 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:22:15.192891   35144 main.go:141] libmachine: (ha-797954-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:37:e1:1a", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:17:17 +0000 UTC Type:0 Mac:52:54:00:37:e1:1a Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:ha-797954-m03 Clientid:01:52:54:00:37:e1:1a}
	I0920 21:22:15.192916   35144 main.go:141] libmachine: (ha-797954-m03) DBG | domain ha-797954-m03 has defined IP address 192.168.39.108 and MAC address 52:54:00:37:e1:1a in network mk-ha-797954
	I0920 21:22:15.193093   35144 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHPort
	I0920 21:22:15.193232   35144 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHKeyPath
	I0920 21:22:15.193340   35144 main.go:141] libmachine: (ha-797954-m03) Calling .GetSSHUsername
	I0920 21:22:15.193450   35144 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m03/id_rsa Username:docker}
	I0920 21:22:15.285093   35144 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 21:22:15.339888   35144 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 21:22:15.395776   35144 main.go:141] libmachine: Stopping "ha-797954-m03"...
	I0920 21:22:15.395806   35144 main.go:141] libmachine: (ha-797954-m03) Calling .GetState
	I0920 21:22:15.397449   35144 main.go:141] libmachine: (ha-797954-m03) Calling .Stop
	I0920 21:22:15.401499   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 0/120
	I0920 21:22:16.403130   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 1/120
	I0920 21:22:17.404513   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 2/120
	I0920 21:22:18.405909   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 3/120
	I0920 21:22:19.407854   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 4/120
	I0920 21:22:20.410134   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 5/120
	I0920 21:22:21.411680   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 6/120
	I0920 21:22:22.414268   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 7/120
	I0920 21:22:23.415650   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 8/120
	I0920 21:22:24.417436   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 9/120
	I0920 21:22:25.418742   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 10/120
	I0920 21:22:26.420383   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 11/120
	I0920 21:22:27.422000   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 12/120
	I0920 21:22:28.423656   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 13/120
	I0920 21:22:29.425279   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 14/120
	I0920 21:22:30.427446   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 15/120
	I0920 21:22:31.429573   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 16/120
	I0920 21:22:32.430885   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 17/120
	I0920 21:22:33.432618   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 18/120
	I0920 21:22:34.434273   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 19/120
	I0920 21:22:35.436145   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 20/120
	I0920 21:22:36.437931   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 21/120
	I0920 21:22:37.439177   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 22/120
	I0920 21:22:38.440831   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 23/120
	I0920 21:22:39.442080   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 24/120
	I0920 21:22:40.443947   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 25/120
	I0920 21:22:41.445703   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 26/120
	I0920 21:22:42.447298   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 27/120
	I0920 21:22:43.449803   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 28/120
	I0920 21:22:44.451439   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 29/120
	I0920 21:22:45.453478   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 30/120
	I0920 21:22:46.454842   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 31/120
	I0920 21:22:47.456234   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 32/120
	I0920 21:22:48.457642   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 33/120
	I0920 21:22:49.458934   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 34/120
	I0920 21:22:50.461051   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 35/120
	I0920 21:22:51.462408   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 36/120
	I0920 21:22:52.463777   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 37/120
	I0920 21:22:53.465279   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 38/120
	I0920 21:22:54.466641   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 39/120
	I0920 21:22:55.468756   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 40/120
	I0920 21:22:56.470051   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 41/120
	I0920 21:22:57.471467   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 42/120
	I0920 21:22:58.472943   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 43/120
	I0920 21:22:59.474434   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 44/120
	I0920 21:23:00.476219   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 45/120
	I0920 21:23:01.477717   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 46/120
	I0920 21:23:02.479528   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 47/120
	I0920 21:23:03.481056   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 48/120
	I0920 21:23:04.482513   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 49/120
	I0920 21:23:05.484620   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 50/120
	I0920 21:23:06.486188   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 51/120
	I0920 21:23:07.487753   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 52/120
	I0920 21:23:08.489450   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 53/120
	I0920 21:23:09.491044   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 54/120
	I0920 21:23:10.493024   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 55/120
	I0920 21:23:11.494407   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 56/120
	I0920 21:23:12.496094   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 57/120
	I0920 21:23:13.498097   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 58/120
	I0920 21:23:14.499681   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 59/120
	I0920 21:23:15.501703   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 60/120
	I0920 21:23:16.503447   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 61/120
	I0920 21:23:17.505486   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 62/120
	I0920 21:23:18.507113   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 63/120
	I0920 21:23:19.508514   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 64/120
	I0920 21:23:20.510716   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 65/120
	I0920 21:23:21.511954   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 66/120
	I0920 21:23:22.513364   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 67/120
	I0920 21:23:23.514638   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 68/120
	I0920 21:23:24.516067   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 69/120
	I0920 21:23:25.517865   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 70/120
	I0920 21:23:26.519269   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 71/120
	I0920 21:23:27.520868   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 72/120
	I0920 21:23:28.522202   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 73/120
	I0920 21:23:29.523630   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 74/120
	I0920 21:23:30.525551   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 75/120
	I0920 21:23:31.526915   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 76/120
	I0920 21:23:32.528182   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 77/120
	I0920 21:23:33.529524   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 78/120
	I0920 21:23:34.531111   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 79/120
	I0920 21:23:35.533104   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 80/120
	I0920 21:23:36.534396   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 81/120
	I0920 21:23:37.535805   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 82/120
	I0920 21:23:38.537162   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 83/120
	I0920 21:23:39.538617   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 84/120
	I0920 21:23:40.540584   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 85/120
	I0920 21:23:41.542077   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 86/120
	I0920 21:23:42.543887   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 87/120
	I0920 21:23:43.545563   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 88/120
	I0920 21:23:44.547519   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 89/120
	I0920 21:23:45.549169   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 90/120
	I0920 21:23:46.551411   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 91/120
	I0920 21:23:47.552752   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 92/120
	I0920 21:23:48.554068   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 93/120
	I0920 21:23:49.555527   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 94/120
	I0920 21:23:50.557459   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 95/120
	I0920 21:23:51.559049   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 96/120
	I0920 21:23:52.560310   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 97/120
	I0920 21:23:53.561654   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 98/120
	I0920 21:23:54.563328   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 99/120
	I0920 21:23:55.565537   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 100/120
	I0920 21:23:56.567000   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 101/120
	I0920 21:23:57.568534   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 102/120
	I0920 21:23:58.569978   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 103/120
	I0920 21:23:59.571465   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 104/120
	I0920 21:24:00.573351   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 105/120
	I0920 21:24:01.574703   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 106/120
	I0920 21:24:02.576202   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 107/120
	I0920 21:24:03.577757   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 108/120
	I0920 21:24:04.579300   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 109/120
	I0920 21:24:05.580800   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 110/120
	I0920 21:24:06.582278   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 111/120
	I0920 21:24:07.583646   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 112/120
	I0920 21:24:08.585263   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 113/120
	I0920 21:24:09.586767   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 114/120
	I0920 21:24:10.588235   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 115/120
	I0920 21:24:11.589790   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 116/120
	I0920 21:24:12.591586   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 117/120
	I0920 21:24:13.592853   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 118/120
	I0920 21:24:14.594660   35144 main.go:141] libmachine: (ha-797954-m03) Waiting for machine to stop 119/120
	I0920 21:24:15.595761   35144 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 21:24:15.595824   35144 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 21:24:15.598215   35144 out.go:201] 
	W0920 21:24:15.599750   35144 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 21:24:15.599775   35144 out.go:270] * 
	* 
	W0920 21:24:15.602513   35144 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 21:24:15.603802   35144 out.go:201] 

** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-797954 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-797954 --wait=true -v=7 --alsologtostderr
E0920 21:24:39.510112   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:25:09.138250   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-797954 --wait=true -v=7 --alsologtostderr: (4m3.902965717s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-797954
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-797954 -n ha-797954
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 logs -n 25: (1.892197006s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m04 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp testdata/cp-test.txt                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m03 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-797954 node stop m02 -v=7                                                     | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-797954 node start m02 -v=7                                                    | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-797954 -v=7                                                           | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-797954 -v=7                                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-797954 --wait=true -v=7                                                    | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:24 UTC | 20 Sep 24 21:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-797954                                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:28 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
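	Each Audit row above records one CLI invocation against the ha-797954 profile. A minimal sketch of the cp/ssh round trip those rows exercise, assuming it is run from the test workspace root where the binary and testdata/ live:

	# Copy a local file to the m04 node, then read it back over ssh
	out/minikube-linux-amd64 -p ha-797954 cp testdata/cp-test.txt ha-797954-m04:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"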
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:24:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:24:15.649997   35620 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:24:15.650123   35620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:15.650132   35620 out.go:358] Setting ErrFile to fd 2...
	I0920 21:24:15.650137   35620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:15.650301   35620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:24:15.650821   35620 out.go:352] Setting JSON to false
	I0920 21:24:15.651748   35620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4004,"bootTime":1726863452,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:24:15.651850   35620 start.go:139] virtualization: kvm guest
	I0920 21:24:15.654281   35620 out.go:177] * [ha-797954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:24:15.655546   35620 notify.go:220] Checking for updates...
	I0920 21:24:15.655550   35620 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:24:15.656922   35620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:24:15.658275   35620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:24:15.659432   35620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:24:15.660478   35620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:24:15.661534   35620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:24:15.663177   35620 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:24:15.663296   35620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:24:15.663902   35620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:24:15.663951   35620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:24:15.680250   35620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36177
	I0920 21:24:15.680799   35620 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:24:15.681419   35620 main.go:141] libmachine: Using API Version  1
	I0920 21:24:15.681443   35620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:24:15.681786   35620 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:24:15.681994   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.718748   35620 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:24:15.720130   35620 start.go:297] selected driver: kvm2
	I0920 21:24:15.720154   35620 start.go:901] validating driver "kvm2" against &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:24:15.720291   35620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:24:15.720614   35620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:24:15.720730   35620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:24:15.736442   35620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:24:15.737152   35620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:24:15.737181   35620 cni.go:84] Creating CNI manager for ""
	I0920 21:24:15.737236   35620 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 21:24:15.737291   35620 start.go:340] cluster config:
	{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:24:15.737429   35620 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:24:15.739501   35620 out.go:177] * Starting "ha-797954" primary control-plane node in "ha-797954" cluster
	I0920 21:24:15.740742   35620 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:24:15.740802   35620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:24:15.740814   35620 cache.go:56] Caching tarball of preloaded images
	I0920 21:24:15.740880   35620 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:24:15.740890   35620 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:24:15.741034   35620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:24:15.741289   35620 start.go:360] acquireMachinesLock for ha-797954: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:24:15.741357   35620 start.go:364] duration metric: took 37.229µs to acquireMachinesLock for "ha-797954"
	I0920 21:24:15.741377   35620 start.go:96] Skipping create...Using existing machine configuration
	I0920 21:24:15.741386   35620 fix.go:54] fixHost starting: 
	I0920 21:24:15.741671   35620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:24:15.741707   35620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:24:15.756520   35620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0920 21:24:15.757002   35620 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:24:15.757461   35620 main.go:141] libmachine: Using API Version  1
	I0920 21:24:15.757480   35620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:24:15.757773   35620 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:24:15.757950   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.758095   35620 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:24:15.759677   35620 fix.go:112] recreateIfNeeded on ha-797954: state=Running err=<nil>
	W0920 21:24:15.759694   35620 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 21:24:15.762367   35620 out.go:177] * Updating the running kvm2 "ha-797954" VM ...
	I0920 21:24:15.763919   35620 machine.go:93] provisionDockerMachine start ...
	I0920 21:24:15.763948   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.764182   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:15.766802   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.767220   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:15.767240   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.767421   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:15.767591   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.767717   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.767822   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:15.767993   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:15.768216   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:15.768228   35620 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 21:24:15.880433   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:24:15.880461   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:15.880702   35620 buildroot.go:166] provisioning hostname "ha-797954"
	I0920 21:24:15.880732   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:15.880936   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:15.884767   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.885429   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:15.885459   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.885900   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:15.886193   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.886416   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.886682   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:15.886889   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:15.887085   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:15.887098   35620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954 && echo "ha-797954" | sudo tee /etc/hostname
	I0920 21:24:16.012054   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:24:16.012081   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.014890   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.015229   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.015253   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.015475   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.015629   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.015791   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.015924   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.016045   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:16.016198   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:16.016221   35620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:24:16.128043   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:24:16.128077   35620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:24:16.128117   35620 buildroot.go:174] setting up certificates
	I0920 21:24:16.128133   35620 provision.go:84] configureAuth start
	I0920 21:24:16.128150   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:16.128489   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:24:16.131012   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.131406   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.131441   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.131599   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.133722   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.134074   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.134097   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.134223   35620 provision.go:143] copyHostCerts
	I0920 21:24:16.134259   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:24:16.134289   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:24:16.134305   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:24:16.134375   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:24:16.134442   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:24:16.134459   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:24:16.134465   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:24:16.134489   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:24:16.134527   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:24:16.134542   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:24:16.134550   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:24:16.134571   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:24:16.134613   35620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954 san=[127.0.0.1 192.168.39.240 ha-797954 localhost minikube]
	I0920 21:24:16.651963   35620 provision.go:177] copyRemoteCerts
	I0920 21:24:16.652032   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:24:16.652056   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.655308   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.655743   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.655770   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.655998   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.656183   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.656342   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.656459   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:24:16.741854   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:24:16.741930   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 21:24:16.769896   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:24:16.769984   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 21:24:16.797560   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:24:16.797641   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:24:16.825493   35620 provision.go:87] duration metric: took 697.34474ms to configureAuth
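	configureAuth above regenerates the server certificate with the SANs logged at 21:24:16.134 (127.0.0.1, 192.168.39.240, ha-797954, localhost, minikube) and copies it to /etc/docker on the node. A quick check of the installed certificate, assuming openssl is available in the guest image:

	# Print the SANs of the server cert the provisioner just pushed
	out/minikube-linux-amd64 ssh -p ha-797954 -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'"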
	I0920 21:24:16.825525   35620 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:24:16.825772   35620 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:24:16.825875   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.828713   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.829075   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.829097   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.829310   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.829521   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.829671   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.829790   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.829973   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:16.830150   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:16.830165   35620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:25:47.606550   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:25:47.606585   35620 machine.go:96] duration metric: took 1m31.842645407s to provisionDockerMachine
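	Nearly all of the 1m31.8s provisionDockerMachine time above is spent in the single SSH step that writes /etc/sysconfig/crio.minikube and restarts crio (issued at 21:24:16.830, returning at 21:25:47.606). A sketch for inspecting that step on the node, assuming the profile is still running:

	# Confirm what the provisioner wrote and whether crio came back cleanly
	out/minikube-linux-amd64 ssh -p ha-797954 -- "sudo cat /etc/sysconfig/crio.minikube"
	out/minikube-linux-amd64 ssh -p ha-797954 -- "sudo systemctl status crio --no-pager"
	out/minikube-linux-amd64 ssh -p ha-797954 -- "sudo journalctl -u crio --no-pager | tail -n 50"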
	I0920 21:25:47.606601   35620 start.go:293] postStartSetup for "ha-797954" (driver="kvm2")
	I0920 21:25:47.606614   35620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:25:47.606634   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.606928   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:25:47.606954   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.610404   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.610814   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.610840   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.611019   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.611211   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.611390   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.611500   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:47.699109   35620 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:25:47.704073   35620 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:25:47.704095   35620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:25:47.704156   35620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:25:47.704260   35620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:25:47.704279   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:25:47.704365   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:25:47.714823   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:25:47.740542   35620 start.go:296] duration metric: took 133.929154ms for postStartSetup
	I0920 21:25:47.740580   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.740893   35620 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 21:25:47.740919   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.743695   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.744153   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.744192   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.744320   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.744490   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.744633   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.744745   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	W0920 21:25:47.830525   35620 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 21:25:47.830557   35620 fix.go:56] duration metric: took 1m32.089169854s for fixHost
	I0920 21:25:47.830583   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.833572   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.834040   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.834066   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.834292   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.834525   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.834759   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.834982   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.835170   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:25:47.835375   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:25:47.835386   35620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:25:47.943954   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726867547.898946556
	
	I0920 21:25:47.943982   35620 fix.go:216] guest clock: 1726867547.898946556
	I0920 21:25:47.943990   35620 fix.go:229] Guest: 2024-09-20 21:25:47.898946556 +0000 UTC Remote: 2024-09-20 21:25:47.830565171 +0000 UTC m=+92.216677937 (delta=68.381385ms)
	I0920 21:25:47.944028   35620 fix.go:200] guest clock delta is within tolerance: 68.381385ms
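	The guest clock check above runs date +%s.%N over SSH and compares the result with the controller's own clock; here the delta is 68ms and passes. A one-line spot check, assuming the profile is up:

	out/minikube-linux-amd64 ssh -p ha-797954 -- "date +%s.%N"; date +%s.%N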
	I0920 21:25:47.944039   35620 start.go:83] releasing machines lock for "ha-797954", held for 1m32.202669546s
	I0920 21:25:47.944064   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.944319   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:25:47.947293   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.947673   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.947702   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.947863   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948458   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948626   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948696   35620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:25:47.948744   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.948837   35620 ssh_runner.go:195] Run: cat /version.json
	I0920 21:25:47.948871   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.951364   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.951607   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.951823   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.951847   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.952030   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.952066   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.952090   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.952221   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.952297   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.952376   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.952448   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.952513   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:47.952569   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.952720   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:48.032466   35620 ssh_runner.go:195] Run: systemctl --version
	I0920 21:25:48.052519   35620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:25:48.215416   35620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:25:48.222105   35620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:25:48.222184   35620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:25:48.231627   35620 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 21:25:48.231653   35620 start.go:495] detecting cgroup driver to use...
	I0920 21:25:48.231724   35620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:25:48.247981   35620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:25:48.262652   35620 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:25:48.262724   35620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:25:48.276789   35620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:25:48.290640   35620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:25:48.443842   35620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:25:48.596783   35620 docker.go:233] disabling docker service ...
	I0920 21:25:48.596871   35620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:25:48.614368   35620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:25:48.630802   35620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:25:48.782741   35620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:25:48.932608   35620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:25:48.946785   35620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:25:48.966742   35620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:25:48.966803   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.977971   35620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:25:48.978038   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.989131   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.999976   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.011730   35620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:25:49.022492   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.034804   35620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.046484   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.057266   35620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:25:49.067541   35620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:25:49.077076   35620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:25:49.219760   35620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:25:51.603405   35620 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.383607239s)
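	The sed sequence above (21:25:48.96 through 21:25:49.05) edits /etc/crio/crio.conf.d/02-crio.conf in place before this restart: it pins the pause image, sets cgroup_manager to "cgroupfs", re-adds conmon_cgroup = "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A verification sketch, with the expected values taken from those commands:

	# Check the keys the sed commands set (surrounding sections depend on the stock 02-crio.conf)
	out/minikube-linux-amd64 ssh -p ha-797954 -- \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
	# Expected, in some order:
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",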
	I0920 21:25:51.603435   35620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:25:51.603481   35620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:25:51.608503   35620 start.go:563] Will wait 60s for crictl version
	I0920 21:25:51.608556   35620 ssh_runner.go:195] Run: which crictl
	I0920 21:25:51.612692   35620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:25:51.652796   35620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:25:51.652866   35620 ssh_runner.go:195] Run: crio --version
	I0920 21:25:51.682397   35620 ssh_runner.go:195] Run: crio --version
	I0920 21:25:51.713410   35620 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:25:51.714486   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:25:51.717401   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:51.717772   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:51.717796   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:51.718081   35620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:25:51.722824   35620 kubeadm.go:883] updating cluster {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:25:51.723005   35620 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:25:51.723054   35620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:25:51.765779   35620 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:25:51.765803   35620 crio.go:433] Images already preloaded, skipping extraction
	I0920 21:25:51.765847   35620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:25:51.800197   35620 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:25:51.800220   35620 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:25:51.800229   35620 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.1 crio true true} ...
	I0920 21:25:51.800346   35620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:25:51.800429   35620 ssh_runner.go:195] Run: crio config
	I0920 21:25:51.857184   35620 cni.go:84] Creating CNI manager for ""
	I0920 21:25:51.857208   35620 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 21:25:51.857220   35620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:25:51.857240   35620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-797954 NodeName:ha-797954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:25:51.857381   35620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-797954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 21:25:51.857400   35620 kube-vip.go:115] generating kube-vip config ...
	I0920 21:25:51.857438   35620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:25:51.869355   35620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:25:51.869487   35620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
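The pod spec above is the kube-vip static pod manifest that minikube writes into /etc/kubernetes/manifests (copied as kube-vip.yaml a few lines below). Per its env block it announces the HA virtual IP 192.168.39.254 on eth0 and, since the IPVS modules loaded (see the modprobe and "auto-enabling control-plane load-balancing" lines above), also load-balances API-server traffic on port 8443 across the control-plane nodes. As an illustrative check only (not part of this test run), one could confirm the VIP and the static pod on the node:

  # illustrative only; assumes a shell on the ha-797954 node
  ip addr show eth0 | grep 192.168.39.254   # VIP announced by kube-vip
  sudo crictl ps --name kube-vip            # static pod container is running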
	I0920 21:25:51.869556   35620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:25:51.879747   35620 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:25:51.879809   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 21:25:51.889590   35620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 21:25:51.907377   35620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:25:51.925001   35620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 21:25:51.942169   35620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
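Both generated artifacts, the kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new and the kube-vip manifest written to /etc/kubernetes/manifests/kube-vip.yaml, can be sanity-checked on the node before kubeadm is invoked. A minimal sketch, assuming a shell on the node and using the kubeadm binary that minikube keeps under /var/lib/minikube/binaries/v1.31.1/ (listed earlier in this log):

  # illustrative only; not executed by the test
  sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new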
	I0920 21:25:51.961886   35620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:25:51.965834   35620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:25:52.120260   35620 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:25:52.140180   35620 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.240
	I0920 21:25:52.140203   35620 certs.go:194] generating shared ca certs ...
	I0920 21:25:52.140219   35620 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.140414   35620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:25:52.140465   35620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:25:52.140475   35620 certs.go:256] generating profile certs ...
	I0920 21:25:52.140553   35620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:25:52.140578   35620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489
	I0920 21:25:52.140590   35620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.108 192.168.39.254]
	I0920 21:25:52.340380   35620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 ...
	I0920 21:25:52.340411   35620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489: {Name:mka8fc944fe2dbf9c2642397fdc5913effc5ae98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.340598   35620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489 ...
	I0920 21:25:52.340613   35620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489: {Name:mk782cfcf686f75aaf2a141659862084d03c9680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.340708   35620 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:25:52.340930   35620 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:25:52.341071   35620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
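The apiserver serving certificate is generated here with a SAN list covering every address a client may use: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service), 127.0.0.1, 10.0.0.1, the three control-plane node IPs, and the 192.168.39.254 HA VIP, as shown in the crypto.go line above. An illustrative way to inspect the SANs once the cert has been copied to the node:

  # illustrative only
  openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'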
	I0920 21:25:52.341086   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:25:52.341098   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:25:52.341109   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:25:52.341122   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:25:52.341135   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:25:52.341146   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:25:52.341161   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:25:52.341173   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:25:52.341221   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:25:52.341261   35620 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:25:52.341270   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:25:52.341289   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:25:52.341314   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:25:52.341340   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:25:52.341379   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:25:52.341430   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.341443   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.341453   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.341982   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:25:52.368056   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:25:52.393457   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:25:52.419144   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:25:52.445333   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 21:25:52.470934   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:25:52.497933   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:25:52.522974   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:25:52.547756   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:25:52.572376   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:25:52.597981   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:25:52.623093   35620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:25:52.640555   35620 ssh_runner.go:195] Run: openssl version
	I0920 21:25:52.646929   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:25:52.658241   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.662706   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.662758   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.668333   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:25:52.677996   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:25:52.688886   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.693503   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.693552   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.699383   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:25:52.709505   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:25:52.720468   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.725056   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.725108   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.730838   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:25:52.740207   35620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:25:52.744843   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 21:25:52.750439   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 21:25:52.756421   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 21:25:52.762115   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 21:25:52.767749   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 21:25:52.773331   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
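The openssl -checkend 86400 calls above exit with status 0 only if the certificate will still be valid 86,400 seconds (24 hours) from now; a non-zero status presumably signals minikube that the cert should be renewed before the restart. The same check can be reproduced by hand, for example:

  # illustrative only
  openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/peer.crt \
    && echo 'valid for at least 24h' || echo 'expires within 24h'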
	I0920 21:25:52.778989   35620 kubeadm.go:392] StartCluster: {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:25:52.779124   35620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:25:52.779164   35620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:25:52.816983   35620 cri.go:89] found id: "a959d781d8bf0adb294844a603047f84c05f68a635b7d1871e3bf12e35241a92"
	I0920 21:25:52.817004   35620 cri.go:89] found id: "91b817752376881be1a3e8e15845575f07c5809aa959117fb187c3e302af6c64"
	I0920 21:25:52.817008   35620 cri.go:89] found id: "4fef15b7cfc35ef22719ef72675914a19a60d5438876eb0bd4d9418e7b9dbf3a"
	I0920 21:25:52.817011   35620 cri.go:89] found id: "e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50"
	I0920 21:25:52.817014   35620 cri.go:89] found id: "0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab"
	I0920 21:25:52.817016   35620 cri.go:89] found id: "0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79"
	I0920 21:25:52.817019   35620 cri.go:89] found id: "d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2"
	I0920 21:25:52.817027   35620 cri.go:89] found id: "6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f"
	I0920 21:25:52.817030   35620 cri.go:89] found id: "371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb"
	I0920 21:25:52.817035   35620 cri.go:89] found id: "34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463"
	I0920 21:25:52.817037   35620 cri.go:89] found id: "613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e"
	I0920 21:25:52.817042   35620 cri.go:89] found id: "3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360"
	I0920 21:25:52.817044   35620 cri.go:89] found id: "0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674"
	I0920 21:25:52.817047   35620 cri.go:89] found id: ""
	I0920 21:25:52.817085   35620 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.230665058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867700230447860,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64d7b61c-f6d4-42bb-bf9b-603806f209ea name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.231335536Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c930097d-e314-4c74-b739-7964644b822c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.231419748Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c930097d-e314-4c74-b739-7964644b822c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.231927164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c930097d-e314-4c74-b739-7964644b822c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.292213830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=eb8e653e-804e-400f-acce-ea2a31e64ade name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.292343981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=eb8e653e-804e-400f-acce-ea2a31e64ade name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.293860529Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c87fd38-27d9-4bad-876e-21f4983b6145 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.294579775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867700294547121,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c87fd38-27d9-4bad-876e-21f4983b6145 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.295643524Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a28a06d-a6d0-4bee-8dcc-c1128ee1b9e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.296097492Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a28a06d-a6d0-4bee-8dcc-c1128ee1b9e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.297423894Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a28a06d-a6d0-4bee-8dcc-c1128ee1b9e4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.344826783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73261a24-9dbe-471a-8946-f0dd2aa0ab8f name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.344932533Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73261a24-9dbe-471a-8946-f0dd2aa0ab8f name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.346179011Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=70ea2851-b8df-4229-8b23-7da23287ecc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.346747732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867700346708536,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70ea2851-b8df-4229-8b23-7da23287ecc8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.347288962Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5354f2f3-60d8-4c7c-b6c8-a84415b8d4e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.347369689Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5354f2f3-60d8-4c7c-b6c8-a84415b8d4e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.347873763Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5354f2f3-60d8-4c7c-b6c8-a84415b8d4e1 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.395538621Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2d6883b-c0cb-48a4-806f-52becf8067c2 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.395617146Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2d6883b-c0cb-48a4-806f-52becf8067c2 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.397226175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1995dbd3-14b5-40fd-9ba9-dc6c71e42cee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.397736439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867700397706820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1995dbd3-14b5-40fd-9ba9-dc6c71e42cee name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.398816697Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1bb26229-5e5f-49b1-81ff-c0e7f3e1099f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.398877348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1bb26229-5e5f-49b1-81ff-c0e7f3e1099f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:28:20 ha-797954 crio[3600]: time="2024-09-20 21:28:20.399273019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1bb26229-5e5f-49b1-81ff-c0e7f3e1099f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	01d94fc817601       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   15438a46df747       storage-provisioner
	63327c191bb5e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   2                   45584b0106ac0       kube-controller-manager-ha-797954
	9acf7608b1c89       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            3                   ee726ff249d4f       kube-apiserver-ha-797954
	cb1af3ebd34a5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   73c1f3b73fe01       busybox-7dff88458-44q29
	cdf0f3f29d025       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   15438a46df747       storage-provisioner
	24a005f08f242       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   5bc2ff4eaee7a       kube-vip-ha-797954
	4945a06bdca1b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      2 minutes ago        Running             kube-proxy                1                   718361b08fad6       kube-proxy-qx9ph
	c5ae7c3c8fc8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      2 minutes ago        Running             kindnet-cni               1                   9b769b2f2ea7c       kindnet-6lk2f
	c08b5b872427f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   18724a1cd47b2       coredns-7c65d6cfc9-djhfr
	bd2b308172eed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      2 minutes ago        Running             etcd                      1                   a091191b9d39f       etcd-ha-797954
	0d38acbb961f5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      2 minutes ago        Exited              kube-controller-manager   1                   45584b0106ac0       kube-controller-manager-ha-797954
	b6084f1904ab2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      2 minutes ago        Exited              kube-apiserver            2                   ee726ff249d4f       kube-apiserver-ha-797954
	8d34f2b26411d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      2 minutes ago        Running             kube-scheduler            1                   bee79a999f9f0       kube-scheduler-ha-797954
	79815991ab9ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      2 minutes ago        Running             coredns                   1                   bf170f47ff010       coredns-7c65d6cfc9-bkcf8
	468d1e1f7ac8a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago        Exited              busybox                   0                   d6db457e8bbe8       busybox-7dff88458-44q29
	e3c9e6da67286       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   48e2a20451c46       coredns-7c65d6cfc9-djhfr
	0e272e92406da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      12 minutes ago       Exited              coredns                   0                   891e1332d4c2d       coredns-7c65d6cfc9-bkcf8
	d34d703bb77f7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      12 minutes ago       Exited              kindnet-cni               0                   794a246d61681       kindnet-6lk2f
	6367518948d4f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      12 minutes ago       Exited              kube-proxy                0                   457a98ab41d5f       kube-proxy-qx9ph
	34e481bafb0d8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      12 minutes ago       Exited              kube-scheduler            0                   0e7df03407c5d       kube-scheduler-ha-797954
	613c4c00096cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      12 minutes ago       Exited              etcd                      0                   388cd6346f6a7       etcd-ha-797954
	
	
	==> coredns [0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab] <==
	[INFO] 10.244.0.4:37051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021353644s
	[INFO] 10.244.0.4:56814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025486s
	[INFO] 10.244.0.4:60619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004011357s
	[INFO] 10.244.0.4:45781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219299s
	[INFO] 10.244.1.2:58895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003827s
	[INFO] 10.244.1.2:52799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348157s
	[INFO] 10.244.1.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146401s
	[INFO] 10.244.1.2:58888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000279379s
	[INFO] 10.244.2.2:45400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149773s
	[INFO] 10.244.2.2:42656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000284841s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168662s
	[INFO] 10.244.1.2:33336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145322s
	[INFO] 10.244.1.2:44953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000245865s
	[INFO] 10.244.2.2:58673 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183778s
	[INFO] 10.244.2.2:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198561s
	[INFO] 10.244.1.2:49222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209152s
	[INFO] 10.244.1.2:57989 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183561s
	[INFO] 10.244.1.2:57201 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017075s
	[INFO] 10.244.2.2:46144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184056s
	[INFO] 10.244.2.2:53312 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152142s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1720&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1726&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1760&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c] <==
	[INFO] plugin/kubernetes: Trace[1058542172]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:00.738) (total time: 10001ms):
	Trace[1058542172]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:26:10.740)
	Trace[1058542172]: [10.001950418s] [10.001950418s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1665169565]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:00.868) (total time: 10001ms):
	Trace[1665169565]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:26:10.870)
	Trace[1665169565]: [10.001779074s] [10.001779074s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:47780->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:47780->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42526->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c08b5b872427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2] <==
	[INFO] plugin/kubernetes: Trace[320100368]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:10.700) (total time: 11692ms):
	Trace[320100368]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40994->10.96.0.1:443: read: connection reset by peer 11691ms (21:26:22.392)
	Trace[320100368]: [11.692140338s] [11.692140338s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40994->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[770252672]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:10.577) (total time: 11815ms):
	Trace[770252672]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer 11814ms (21:26:22.392)
	Trace[770252672]: [11.815798267s] [11.815798267s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50] <==
	[INFO] 10.244.1.2:50084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173888s
	[INFO] 10.244.2.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281143s
	[INFO] 10.244.2.2:49861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002195678s
	[INFO] 10.244.2.2:43162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194152s
	[INFO] 10.244.2.2:38062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149126s
	[INFO] 10.244.2.2:46161 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726482s
	[INFO] 10.244.2.2:38358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105606s
	[INFO] 10.244.0.4:34794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107864s
	[INFO] 10.244.0.4:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073572s
	[INFO] 10.244.0.4:59058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059888s
	[INFO] 10.244.1.2:33254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111604s
	[INFO] 10.244.1.2:55753 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094078s
	[INFO] 10.244.2.2:46481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128659s
	[INFO] 10.244.2.2:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192291s
	[INFO] 10.244.0.4:45612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142384s
	[INFO] 10.244.0.4:32977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240882s
	[INFO] 10.244.0.4:36320 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157906s
	[INFO] 10.244.0.4:57209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121261s
	[INFO] 10.244.1.2:52740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116141s
	[INFO] 10.244.2.2:45858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145799s
	[INFO] 10.244.2.2:32912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224434s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1726&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1726&timeout=7m5s&timeoutSeconds=425&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	Name:               ha-797954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:28:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-797954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ee8e79db7754067acc26d8f79541cf4
	  System UUID:                1ee8e79d-b775-4067-acc2-6d8f79541cf4
	  Boot ID:                    2a01d700-0c79-4cac-934b-ff091b3f743b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-44q29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-bkcf8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 coredns-7c65d6cfc9-djhfr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-ha-797954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-6lk2f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-797954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-797954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qx9ph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-797954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-797954                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-797954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-797954 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-797954 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-797954 status is now: NodeReady
	  Normal   RegisteredNode           11m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Warning  ContainerGCFailed        2m39s (x2 over 3m39s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             2m27s (x3 over 3m17s)  kubelet          Node ha-797954 status is now: NodeNotReady
	  Normal   RegisteredNode           101s                   node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	
	
	Name:               ha-797954-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:28:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-797954-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b99cf888073c415c8a60d9eed0a33a77
	  System UUID:                b99cf888-073c-415c-8a60-d9eed0a33a77
	  Boot ID:                    35be719f-c336-4c56-98ca-55749fdadbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5m4df                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-797954-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-qh5rt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-797954-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-797954-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2pjxq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-797954-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-797954-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 91s                  kube-proxy       
	  Normal  Starting                 11m                  kube-proxy       
	  Normal  NodeAllocatableEnforced  11m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)    kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)    kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x7 over 11m)    kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           11m                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  NodeNotReady             8m12s                node-controller  Node ha-797954-m02 status is now: NodeNotReady
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x7 over 2m5s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                 node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           95s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           36s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	
	
	Name:               ha-797954-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_17_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:17:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:28:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:27:59 +0000   Fri, 20 Sep 2024 21:27:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:27:59 +0000   Fri, 20 Sep 2024 21:27:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:27:59 +0000   Fri, 20 Sep 2024 21:27:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:27:59 +0000   Fri, 20 Sep 2024 21:27:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.108
	  Hostname:    ha-797954-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 17e0c265db3c48ee8ba8b007e24689e7
	  System UUID:                17e0c265-db3c-48ee-8ba8-b007e24689e7
	  Boot ID:                    0f349598-810b-4510-9a99-3d6d2d51bd79
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-s4w7f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-797954-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-tm8ml                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-797954-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-797954-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-jskx5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-797954-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-797954-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 36s                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-797954-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal   RegisteredNode           101s               node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal   RegisteredNode           95s                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	  Normal   NodeNotReady             61s                node-controller  Node ha-797954-m03 status is now: NodeNotReady
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  52s (x2 over 52s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s (x2 over 52s)  kubelet          Node ha-797954-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x2 over 52s)  kubelet          Node ha-797954-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 52s                kubelet          Node ha-797954-m03 has been rebooted, boot id: 0f349598-810b-4510-9a99-3d6d2d51bd79
	  Normal   NodeReady                52s                kubelet          Node ha-797954-m03 status is now: NodeReady
	  Normal   RegisteredNode           36s                node-controller  Node ha-797954-m03 event: Registered Node ha-797954-m03 in Controller
	
	
	Name:               ha-797954-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_18_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:18:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:28:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:28:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:28:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:28:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:28:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-797954-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 301819962cda48ca9fc543396f387646
	  System UUID:                30181996-2cda-48ca-9fc5-43396f387646
	  Boot ID:                    517f7625-9dbd-4fa7-9ee7-6452f1cfd966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5zw6l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m24s
	  kube-system                 kube-proxy-ghvv9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4s                     kube-proxy       
	  Normal   Starting                 9m19s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m24s (x2 over 9m25s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m24s (x2 over 9m25s)  kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m24s (x2 over 9m25s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m23s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           9m22s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           9m20s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   NodeReady                9m5s                   kubelet          Node ha-797954-m04 status is now: NodeReady
	  Normal   RegisteredNode           101s                   node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           95s                    node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   NodeNotReady             61s                    node-controller  Node ha-797954-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           36s                    node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   Starting                 8s                     kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 8s (x3 over 8s)        kubelet          Node ha-797954-m04 has been rebooted, boot id: 517f7625-9dbd-4fa7-9ee7-6452f1cfd966
	  Normal   NodeHasSufficientMemory  8s (x4 over 8s)        kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x4 over 8s)        kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x4 over 8s)        kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             8s                     kubelet          Node ha-797954-m04 status is now: NodeNotReady
	  Normal   NodeReady                8s (x2 over 8s)        kubelet          Node ha-797954-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.856812] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062395] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.182824] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.137335] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.286470] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.136041] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.054578] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.065217] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.071912] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.088568] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.488875] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.290079] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:16] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 21:25] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.161736] systemd-fstab-generator[3537]: Ignoring "noauto" option for root device
	[  +0.184396] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.151239] systemd-fstab-generator[3563]: Ignoring "noauto" option for root device
	[  +0.285569] systemd-fstab-generator[3591]: Ignoring "noauto" option for root device
	[  +2.898817] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +1.562216] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.093490] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 21:26] kauditd_printk_skb: 87 callbacks suppressed
	[ +39.173141] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e] <==
	2024/09/20 21:24:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 21:24:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T21:24:17.037225Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.240:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:24:17.037384Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.240:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T21:24:17.037596Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T21:24:17.037762Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037805Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037831Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037894Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038061Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038075Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038084Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038092Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038131Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038195Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038240Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038275Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038303Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.042595Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"warn","ts":"2024-09-20T21:24:17.042688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.29968663s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T21:24:17.042812Z","caller":"traceutil/trace.go:171","msg":"trace[364636347] range","detail":"{range_begin:; range_end:; }","duration":"9.299824074s","start":"2024-09-20T21:24:07.742973Z","end":"2024-09-20T21:24:17.042797Z","steps":["trace[364636347] 'agreement among raft nodes before linearized reading'  (duration: 9.299683567s)"],"step_count":1}
	{"level":"error","ts":"2024-09-20T21:24:17.042851Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-20T21:24:17.043640Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-09-20T21:24:17.043660Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-797954","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	
	==> etcd [bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3] <==
	{"level":"warn","ts":"2024-09-20T21:27:23.549856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:23.551632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:23.601078Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:23.701129Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:23.801538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:23.827027Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"1cdefa49b8abbef9","from":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-09-20T21:27:24.762836Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.108:2380/version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:24.762898Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:25.158181Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:25.160521Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:28.765286Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.108:2380/version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:28.766632Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:30.159514Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:30.160798Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:32.769164Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.108:2380/version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:32.769671Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"582d2db74829ae92","error":"Get \"https://192.168.39.108:2380/version\": dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:35.160207Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-20T21:27:35.161385Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"582d2db74829ae92","rtt":"0s","error":"dial tcp 192.168.39.108:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-20T21:27:36.030282Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.030428Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.030935Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.055918Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"582d2db74829ae92","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T21:27:36.055978Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.057429Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"582d2db74829ae92","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T21:27:36.057453Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	
	
	==> kernel <==
	 21:28:21 up 13 min,  0 users,  load average: 0.26, 0.72, 0.48
	Linux ha-797954 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78] <==
	I0920 21:27:50.540008       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:28:00.535842       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:28:00.535953       1 main.go:299] handling current node
	I0920 21:28:00.536004       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:28:00.536026       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:28:00.536163       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:28:00.536208       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:28:00.536276       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:28:00.536298       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:28:10.543613       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:28:10.543675       1 main.go:299] handling current node
	I0920 21:28:10.543714       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:28:10.543726       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:28:10.543838       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:28:10.543862       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:28:10.543937       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:28:10.543959       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:28:20.539612       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:28:20.539663       1 main.go:299] handling current node
	I0920 21:28:20.539680       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:28:20.539685       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:28:20.539839       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:28:20.539845       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:28:20.539900       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:28:20.539905       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2] <==
	I0920 21:23:37.972785       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:23:47.970974       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:23:47.971078       1 main.go:299] handling current node
	I0920 21:23:47.971162       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:23:47.971186       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:23:47.971335       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:23:47.971358       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:23:47.971442       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:23:47.971573       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:23:57.973347       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:23:57.973397       1 main.go:299] handling current node
	I0920 21:23:57.973410       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:23:57.973416       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:23:57.973658       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:23:57.973700       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:23:57.973811       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:23:57.973833       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:24:07.971140       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:24:07.971258       1 main.go:299] handling current node
	I0920 21:24:07.971301       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:24:07.971320       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:24:07.971541       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:24:07.971574       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:24:07.971647       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:24:07.971666       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd] <==
	I0920 21:26:41.055243       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0920 21:26:41.157159       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 21:26:41.169439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 21:26:41.184808       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 21:26:41.188009       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 21:26:41.188024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 21:26:41.188154       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 21:26:41.188239       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 21:26:41.188263       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 21:26:41.188323       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 21:26:41.188357       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 21:26:41.192800       1 aggregator.go:171] initial CRD sync complete...
	I0920 21:26:41.192885       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 21:26:41.192894       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 21:26:41.192901       1 cache.go:39] Caches are synced for autoregister controller
	I0920 21:26:41.198428       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:26:41.198448       1 policy_source.go:224] refreshing policies
	I0920 21:26:41.229308       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0920 21:26:41.230202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.63]
	I0920 21:26:41.232865       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 21:26:41.253454       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 21:26:41.265893       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 21:26:42.060879       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 21:26:42.783985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.240 192.168.39.63]
	W0920 21:26:52.793149       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240 192.168.39.63]
	
	
	==> kube-apiserver [b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419] <==
	I0920 21:25:59.429382       1 options.go:228] external host was not specified, using 192.168.39.240
	I0920 21:25:59.433715       1 server.go:142] Version: v1.31.1
	I0920 21:25:59.433762       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:25:59.995671       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 21:26:00.045756       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:26:00.056940       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 21:26:00.056977       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 21:26:00.057329       1 instance.go:232] Using reconciler: lease
	W0920 21:26:19.988446       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0920 21:26:19.990953       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 21:26:20.058873       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0920 21:26:20.058996       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760] <==
	I0920 21:25:59.905886       1 serving.go:386] Generated self-signed cert in-memory
	I0920 21:26:00.614025       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 21:26:00.614169       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:26:00.615828       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 21:26:00.616599       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 21:26:00.616857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 21:26:00.616967       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0920 21:26:21.067690       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.240:8443/healthz\": dial tcp 192.168.39.240:8443: connect: connection refused"
	
	
	==> kube-controller-manager [63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5] <==
	I0920 21:27:19.254682       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:27:19.254913       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:19.258196       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:19.287376       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:19.296606       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:19.501652       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="16.915603ms"
	I0920 21:27:19.502702       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.076µs"
	I0920 21:27:20.665919       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:24.472773       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m02"
	I0920 21:27:24.599925       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:28.788857       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:28.801555       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:29.528917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:27:29.751948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.819µs"
	I0920 21:27:30.754760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:34.684241       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:44.750311       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:44.847327       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:27:46.904833       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="10.617957ms"
	I0920 21:27:46.905078       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="120.624µs"
	I0920 21:27:59.483008       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	I0920 21:28:12.539625       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:28:12.539887       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:28:12.558335       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:28:14.556045       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	
	
	==> kube-proxy [4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd] <==
	E0920 21:26:01.272076       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:04.344667       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:07.417378       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:13.561561       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:22.776771       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:41.207962       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 21:26:41.208065       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 21:26:41.208162       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:26:41.327737       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:26:41.327803       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:26:41.327839       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:26:41.331141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:26:41.331757       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:26:41.331962       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:26:41.337045       1 config.go:199] "Starting service config controller"
	I0920 21:26:41.337129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:26:41.337199       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:26:41.337308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:26:41.341128       1 config.go:328] "Starting node config controller"
	I0920 21:26:41.341188       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:26:41.438243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:26:41.438412       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:26:41.441433       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f] <==
	E0920 21:23:06.172135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.386427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.386568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.386856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.387025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.387987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.388191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:24.602178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:24.602250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:27.672440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:27.672590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:27.672735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:27.672755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:49.176094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:49.176214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:49.176390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:49.178433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:52.248893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:52.249256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463] <==
	E0920 21:18:56.228063       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.234373       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 586765d9-0991-47c1-aa4f-7c82cb8a2e76(kube-system/kube-proxy-ghvv9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvv9"
	E0920 21:18:56.234534       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-ghvv9"
	I0920 21:18:56.234608       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.279871       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kct8" node="ha-797954-m04"
	E0920 21:18:56.280008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-4kct8"
	E0920 21:18:56.280247       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dpjmv" node="ha-797954-m04"
	E0920 21:18:56.280445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-dpjmv"
	E0920 21:18:56.401044       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:18:56.401344       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e53077ca-c9f3-433c-8f93-766336e4a9ab(kube-system/kindnet-r4kzr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r4kzr"
	E0920 21:18:56.401460       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-r4kzr"
	I0920 21:18:56.401669       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:24:08.140549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 21:24:08.634304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:09.111123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 21:24:09.759332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 21:24:11.657436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 21:24:11.960214       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 21:24:12.261833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:13.719425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 21:24:13.802712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 21:24:13.844534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 21:24:14.171986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:15.531125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 21:24:16.940022       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005] <==
	W0920 21:26:36.435035       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.240:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:36.435151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.39.240:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.169381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.169538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.465392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.240:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.465598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.240:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.525008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.240:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.525106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.240:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.917370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.917574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.416159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.240:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.416231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.240:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.423124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.423206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.240:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.869594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.240:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.869666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.240:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:39.190993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:39.191055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:41.090431       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 21:26:41.090541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 21:26:41.091142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 21:26:41.091278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:26:41.092671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 21:26:41.092714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 21:27:03.880120       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 21:27:11 ha-797954 kubelet[1315]: E0920 21:27:11.369876    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867631369389316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:11 ha-797954 kubelet[1315]: E0920 21:27:11.369922    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867631369389316,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:16 ha-797954 kubelet[1315]: I0920 21:27:16.128746    1315 scope.go:117] "RemoveContainer" containerID="cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a"
	Sep 20 21:27:21 ha-797954 kubelet[1315]: E0920 21:27:21.372964    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867641372305135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:21 ha-797954 kubelet[1315]: E0920 21:27:21.373004    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867641372305135,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:26 ha-797954 kubelet[1315]: I0920 21:27:26.128543    1315 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-797954" podUID="ad002c23-a891-4bf6-bbb7-bba95f43b1d0"
	Sep 20 21:27:26 ha-797954 kubelet[1315]: I0920 21:27:26.147397    1315 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-797954"
	Sep 20 21:27:27 ha-797954 kubelet[1315]: I0920 21:27:27.643954    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-44q29" podStartSLOduration=547.486931456 podStartE2EDuration="9m9.643913736s" podCreationTimestamp="2024-09-20 21:18:18 +0000 UTC" firstStartedPulling="2024-09-20 21:18:19.423609831 +0000 UTC m=+158.447209983" lastFinishedPulling="2024-09-20 21:18:21.58059211 +0000 UTC m=+160.604192263" observedRunningTime="2024-09-20 21:18:21.930585424 +0000 UTC m=+160.954185584" watchObservedRunningTime="2024-09-20 21:27:27.643913736 +0000 UTC m=+706.667513891"
	Sep 20 21:27:31 ha-797954 kubelet[1315]: E0920 21:27:31.375980    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867651375424728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:31 ha-797954 kubelet[1315]: E0920 21:27:31.376306    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867651375424728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:41 ha-797954 kubelet[1315]: E0920 21:27:41.154059    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:27:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:27:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:27:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:27:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:27:41 ha-797954 kubelet[1315]: E0920 21:27:41.379116    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867661378526202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:41 ha-797954 kubelet[1315]: E0920 21:27:41.379144    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867661378526202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:51 ha-797954 kubelet[1315]: E0920 21:27:51.381134    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867671380404419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:27:51 ha-797954 kubelet[1315]: E0920 21:27:51.381239    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867671380404419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:01 ha-797954 kubelet[1315]: E0920 21:28:01.387981    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867681382994835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:01 ha-797954 kubelet[1315]: E0920 21:28:01.388033    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867681382994835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:11 ha-797954 kubelet[1315]: E0920 21:28:11.390215    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867691389708904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:11 ha-797954 kubelet[1315]: E0920 21:28:11.390249    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867691389708904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:21 ha-797954 kubelet[1315]: E0920 21:28:21.392735    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867701392364913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:28:21 ha-797954 kubelet[1315]: E0920 21:28:21.392778    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867701392364913,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 21:28:19.896517   36956 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-9614/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
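The "bufio.Scanner: token too long" error in the stderr block above is Go's scanner hitting its default 64 KiB per-token limit while reading lastStart.txt. As an illustrative aside (this is not minikube's code, and the file path below is a placeholder), a scanner can be given a larger buffer so a single very long log line still parses:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path; stands in for a log file containing very long lines.
		f, err := os.Open("lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; raise it to 10 MiB so one
		// oversized line no longer fails with bufio.ErrTooLong.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}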
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-797954 -n ha-797954
helpers_test.go:261: (dbg) Run:  kubectl --context ha-797954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (368.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 stop -v=7 --alsologtostderr
E0920 21:29:11.809014   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:30:09.132446   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-797954 stop -v=7 --alsologtostderr: exit status 82 (2m0.473422881s)

                                                
                                                
-- stdout --
	* Stopping node "ha-797954-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:28:39.592914   37396 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:28:39.593212   37396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:28:39.593222   37396 out.go:358] Setting ErrFile to fd 2...
	I0920 21:28:39.593228   37396 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:28:39.593430   37396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:28:39.593700   37396 out.go:352] Setting JSON to false
	I0920 21:28:39.593805   37396 mustload.go:65] Loading cluster: ha-797954
	I0920 21:28:39.594253   37396 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:28:39.594366   37396 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:28:39.594557   37396 mustload.go:65] Loading cluster: ha-797954
	I0920 21:28:39.594711   37396 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:28:39.594755   37396 stop.go:39] StopHost: ha-797954-m04
	I0920 21:28:39.595239   37396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:28:39.595287   37396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:28:39.611716   37396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I0920 21:28:39.612325   37396 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:28:39.612980   37396 main.go:141] libmachine: Using API Version  1
	I0920 21:28:39.613016   37396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:28:39.613320   37396 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:28:39.615511   37396 out.go:177] * Stopping node "ha-797954-m04"  ...
	I0920 21:28:39.616786   37396 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 21:28:39.616815   37396 main.go:141] libmachine: (ha-797954-m04) Calling .DriverName
	I0920 21:28:39.617047   37396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 21:28:39.617084   37396 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHHostname
	I0920 21:28:39.620213   37396 main.go:141] libmachine: (ha-797954-m04) DBG | domain ha-797954-m04 has defined MAC address 52:54:00:c8:ec:4e in network mk-ha-797954
	I0920 21:28:39.620621   37396 main.go:141] libmachine: (ha-797954-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:ec:4e", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:28:06 +0000 UTC Type:0 Mac:52:54:00:c8:ec:4e Iaid: IPaddr:192.168.39.210 Prefix:24 Hostname:ha-797954-m04 Clientid:01:52:54:00:c8:ec:4e}
	I0920 21:28:39.620659   37396 main.go:141] libmachine: (ha-797954-m04) DBG | domain ha-797954-m04 has defined IP address 192.168.39.210 and MAC address 52:54:00:c8:ec:4e in network mk-ha-797954
	I0920 21:28:39.620794   37396 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHPort
	I0920 21:28:39.620967   37396 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHKeyPath
	I0920 21:28:39.621114   37396 main.go:141] libmachine: (ha-797954-m04) Calling .GetSSHUsername
	I0920 21:28:39.621253   37396 sshutil.go:53] new ssh client: &{IP:192.168.39.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954-m04/id_rsa Username:docker}
	I0920 21:28:39.709760   37396 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 21:28:39.764154   37396 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 21:28:39.817692   37396 main.go:141] libmachine: Stopping "ha-797954-m04"...
	I0920 21:28:39.817743   37396 main.go:141] libmachine: (ha-797954-m04) Calling .GetState
	I0920 21:28:39.819249   37396 main.go:141] libmachine: (ha-797954-m04) Calling .Stop
	I0920 21:28:39.822638   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 0/120
	I0920 21:28:40.823948   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 1/120
	I0920 21:28:41.825735   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 2/120
	I0920 21:28:42.827086   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 3/120
	I0920 21:28:43.828558   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 4/120
	I0920 21:28:44.830588   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 5/120
	I0920 21:28:45.832029   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 6/120
	I0920 21:28:46.833441   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 7/120
	I0920 21:28:47.834819   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 8/120
	I0920 21:28:48.836445   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 9/120
	I0920 21:28:49.838662   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 10/120
	I0920 21:28:50.839946   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 11/120
	I0920 21:28:51.841567   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 12/120
	I0920 21:28:52.842839   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 13/120
	I0920 21:28:53.844167   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 14/120
	I0920 21:28:54.845642   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 15/120
	I0920 21:28:55.846990   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 16/120
	I0920 21:28:56.848196   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 17/120
	I0920 21:28:57.849574   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 18/120
	I0920 21:28:58.850877   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 19/120
	I0920 21:28:59.853066   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 20/120
	I0920 21:29:00.854494   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 21/120
	I0920 21:29:01.855725   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 22/120
	I0920 21:29:02.857176   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 23/120
	I0920 21:29:03.858704   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 24/120
	I0920 21:29:04.860323   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 25/120
	I0920 21:29:05.861627   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 26/120
	I0920 21:29:06.862897   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 27/120
	I0920 21:29:07.864174   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 28/120
	I0920 21:29:08.865369   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 29/120
	I0920 21:29:09.867704   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 30/120
	I0920 21:29:10.869452   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 31/120
	I0920 21:29:11.871111   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 32/120
	I0920 21:29:12.873388   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 33/120
	I0920 21:29:13.875108   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 34/120
	I0920 21:29:14.877261   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 35/120
	I0920 21:29:15.878609   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 36/120
	I0920 21:29:16.880126   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 37/120
	I0920 21:29:17.881988   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 38/120
	I0920 21:29:18.883208   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 39/120
	I0920 21:29:19.885447   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 40/120
	I0920 21:29:20.887192   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 41/120
	I0920 21:29:21.888579   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 42/120
	I0920 21:29:22.890043   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 43/120
	I0920 21:29:23.891451   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 44/120
	I0920 21:29:24.893330   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 45/120
	I0920 21:29:25.894653   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 46/120
	I0920 21:29:26.896131   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 47/120
	I0920 21:29:27.897368   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 48/120
	I0920 21:29:28.898747   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 49/120
	I0920 21:29:29.901114   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 50/120
	I0920 21:29:30.902598   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 51/120
	I0920 21:29:31.904856   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 52/120
	I0920 21:29:32.906175   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 53/120
	I0920 21:29:33.907709   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 54/120
	I0920 21:29:34.909328   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 55/120
	I0920 21:29:35.910767   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 56/120
	I0920 21:29:36.912055   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 57/120
	I0920 21:29:37.913391   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 58/120
	I0920 21:29:38.915037   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 59/120
	I0920 21:29:39.917040   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 60/120
	I0920 21:29:40.918445   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 61/120
	I0920 21:29:41.920184   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 62/120
	I0920 21:29:42.921904   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 63/120
	I0920 21:29:43.923367   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 64/120
	I0920 21:29:44.925543   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 65/120
	I0920 21:29:45.926858   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 66/120
	I0920 21:29:46.928259   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 67/120
	I0920 21:29:47.929765   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 68/120
	I0920 21:29:48.931904   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 69/120
	I0920 21:29:49.934342   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 70/120
	I0920 21:29:50.936443   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 71/120
	I0920 21:29:51.937857   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 72/120
	I0920 21:29:52.939370   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 73/120
	I0920 21:29:53.940841   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 74/120
	I0920 21:29:54.942419   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 75/120
	I0920 21:29:55.943932   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 76/120
	I0920 21:29:56.946178   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 77/120
	I0920 21:29:57.947860   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 78/120
	I0920 21:29:58.949406   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 79/120
	I0920 21:29:59.951482   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 80/120
	I0920 21:30:00.953399   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 81/120
	I0920 21:30:01.955030   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 82/120
	I0920 21:30:02.956370   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 83/120
	I0920 21:30:03.957867   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 84/120
	I0920 21:30:04.959790   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 85/120
	I0920 21:30:05.961090   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 86/120
	I0920 21:30:06.963150   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 87/120
	I0920 21:30:07.965588   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 88/120
	I0920 21:30:08.966858   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 89/120
	I0920 21:30:09.968909   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 90/120
	I0920 21:30:10.970295   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 91/120
	I0920 21:30:11.972203   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 92/120
	I0920 21:30:12.974426   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 93/120
	I0920 21:30:13.975866   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 94/120
	I0920 21:30:14.977949   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 95/120
	I0920 21:30:15.979476   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 96/120
	I0920 21:30:16.980786   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 97/120
	I0920 21:30:17.982205   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 98/120
	I0920 21:30:18.983471   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 99/120
	I0920 21:30:19.985611   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 100/120
	I0920 21:30:20.987083   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 101/120
	I0920 21:30:21.988516   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 102/120
	I0920 21:30:22.989872   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 103/120
	I0920 21:30:23.991376   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 104/120
	I0920 21:30:24.993455   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 105/120
	I0920 21:30:25.994834   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 106/120
	I0920 21:30:26.996259   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 107/120
	I0920 21:30:27.997689   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 108/120
	I0920 21:30:28.999338   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 109/120
	I0920 21:30:30.001531   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 110/120
	I0920 21:30:31.003021   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 111/120
	I0920 21:30:32.004397   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 112/120
	I0920 21:30:33.005897   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 113/120
	I0920 21:30:34.007407   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 114/120
	I0920 21:30:35.009396   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 115/120
	I0920 21:30:36.010748   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 116/120
	I0920 21:30:37.012065   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 117/120
	I0920 21:30:38.013533   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 118/120
	I0920 21:30:39.014855   37396 main.go:141] libmachine: (ha-797954-m04) Waiting for machine to stop 119/120
	I0920 21:30:40.016031   37396 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 21:30:40.016108   37396 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 21:30:40.018099   37396 out.go:201] 
	W0920 21:30:40.019383   37396 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 21:30:40.019398   37396 out.go:270] * 
	* 
	W0920 21:30:40.022603   37396 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 21:30:40.023871   37396 out.go:201] 

                                                
                                                
** /stderr **
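For orientation, the stop failure above follows a simple poll-with-budget pattern: the VM state is checked roughly once per second for up to 120 attempts, and the command gives up with GUEST_STOP_TIMEOUT when the machine never leaves "Running". The sketch below is a hypothetical illustration of that pattern only; the state type and getState callback are stand-ins, not minikube's actual libmachine driver API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	type state string

	const (
		running state = "Running"
		stopped state = "Stopped"
	)

	// waitForStop polls the machine state once per second for up to `attempts`
	// tries, mirroring the "Waiting for machine to stop N/120" lines above.
	func waitForStop(getState func() state, attempts int) error {
		for i := 0; i < attempts; i++ {
			if getState() == stopped {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// A driver stub that never reports Stopped, as in the failing test run.
		err := waitForStop(func() state { return running }, 3)
		fmt.Println("stop err:", err)
	}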
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-797954 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr: (19.017514563s)
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-797954 -n ha-797954
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 logs -n 25: (1.769732376s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m04 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp testdata/cp-test.txt                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt                       |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954 sudo cat                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954.txt                                 |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m02 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n                                                                 | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | ha-797954-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-797954 ssh -n ha-797954-m03 sudo cat                                          | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC | 20 Sep 24 21:19 UTC |
	|         | /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-797954 node stop m02 -v=7                                                     | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:19 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-797954 node start m02 -v=7                                                    | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-797954 -v=7                                                           | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-797954 -v=7                                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-797954 --wait=true -v=7                                                    | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:24 UTC | 20 Sep 24 21:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-797954                                                                | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:28 UTC |                     |
	| node    | ha-797954 node delete m03 -v=7                                                   | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:28 UTC | 20 Sep 24 21:28 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-797954 stop -v=7                                                              | ha-797954 | jenkins | v1.34.0 | 20 Sep 24 21:28 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:24:15
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:24:15.649997   35620 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:24:15.650123   35620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:15.650132   35620 out.go:358] Setting ErrFile to fd 2...
	I0920 21:24:15.650137   35620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:24:15.650301   35620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:24:15.650821   35620 out.go:352] Setting JSON to false
	I0920 21:24:15.651748   35620 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4004,"bootTime":1726863452,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:24:15.651850   35620 start.go:139] virtualization: kvm guest
	I0920 21:24:15.654281   35620 out.go:177] * [ha-797954] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:24:15.655546   35620 notify.go:220] Checking for updates...
	I0920 21:24:15.655550   35620 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:24:15.656922   35620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:24:15.658275   35620 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:24:15.659432   35620 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:24:15.660478   35620 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:24:15.661534   35620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:24:15.663177   35620 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:24:15.663296   35620 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:24:15.663902   35620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:24:15.663951   35620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:24:15.680250   35620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36177
	I0920 21:24:15.680799   35620 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:24:15.681419   35620 main.go:141] libmachine: Using API Version  1
	I0920 21:24:15.681443   35620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:24:15.681786   35620 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:24:15.681994   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.718748   35620 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:24:15.720130   35620 start.go:297] selected driver: kvm2
	I0920 21:24:15.720154   35620 start.go:901] validating driver "kvm2" against &{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:24:15.720291   35620 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:24:15.720614   35620 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:24:15.720730   35620 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:24:15.736442   35620 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:24:15.737152   35620 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:24:15.737181   35620 cni.go:84] Creating CNI manager for ""
	I0920 21:24:15.737236   35620 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 21:24:15.737291   35620 start.go:340] cluster config:
	{Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:
false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:24:15.737429   35620 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:24:15.739501   35620 out.go:177] * Starting "ha-797954" primary control-plane node in "ha-797954" cluster
	I0920 21:24:15.740742   35620 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:24:15.740802   35620 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:24:15.740814   35620 cache.go:56] Caching tarball of preloaded images
	I0920 21:24:15.740880   35620 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:24:15.740890   35620 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:24:15.741034   35620 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/config.json ...
	I0920 21:24:15.741289   35620 start.go:360] acquireMachinesLock for ha-797954: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:24:15.741357   35620 start.go:364] duration metric: took 37.229µs to acquireMachinesLock for "ha-797954"
	I0920 21:24:15.741377   35620 start.go:96] Skipping create...Using existing machine configuration
	I0920 21:24:15.741386   35620 fix.go:54] fixHost starting: 
	I0920 21:24:15.741671   35620 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:24:15.741707   35620 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:24:15.756520   35620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0920 21:24:15.757002   35620 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:24:15.757461   35620 main.go:141] libmachine: Using API Version  1
	I0920 21:24:15.757480   35620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:24:15.757773   35620 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:24:15.757950   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.758095   35620 main.go:141] libmachine: (ha-797954) Calling .GetState
	I0920 21:24:15.759677   35620 fix.go:112] recreateIfNeeded on ha-797954: state=Running err=<nil>
	W0920 21:24:15.759694   35620 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 21:24:15.762367   35620 out.go:177] * Updating the running kvm2 "ha-797954" VM ...
	I0920 21:24:15.763919   35620 machine.go:93] provisionDockerMachine start ...
	I0920 21:24:15.763948   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:24:15.764182   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:15.766802   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.767220   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:15.767240   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.767421   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:15.767591   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.767717   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.767822   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:15.767993   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:15.768216   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:15.768228   35620 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 21:24:15.880433   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:24:15.880461   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:15.880702   35620 buildroot.go:166] provisioning hostname "ha-797954"
	I0920 21:24:15.880732   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:15.880936   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:15.884767   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.885429   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:15.885459   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:15.885900   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:15.886193   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.886416   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:15.886682   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:15.886889   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:15.887085   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:15.887098   35620 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-797954 && echo "ha-797954" | sudo tee /etc/hostname
	I0920 21:24:16.012054   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-797954
	
	I0920 21:24:16.012081   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.014890   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.015229   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.015253   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.015475   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.015629   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.015791   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.015924   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.016045   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:16.016198   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:16.016221   35620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-797954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-797954/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-797954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:24:16.128043   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:24:16.128077   35620 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:24:16.128117   35620 buildroot.go:174] setting up certificates
	I0920 21:24:16.128133   35620 provision.go:84] configureAuth start
	I0920 21:24:16.128150   35620 main.go:141] libmachine: (ha-797954) Calling .GetMachineName
	I0920 21:24:16.128489   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:24:16.131012   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.131406   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.131441   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.131599   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.133722   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.134074   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.134097   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.134223   35620 provision.go:143] copyHostCerts
	I0920 21:24:16.134259   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:24:16.134289   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:24:16.134305   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:24:16.134375   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:24:16.134442   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:24:16.134459   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:24:16.134465   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:24:16.134489   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:24:16.134527   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:24:16.134542   35620 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:24:16.134550   35620 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:24:16.134571   35620 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:24:16.134613   35620 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.ha-797954 san=[127.0.0.1 192.168.39.240 ha-797954 localhost minikube]
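
The server certificate is regenerated here with the SANs listed in the log line above (loopback, the node IP 192.168.39.240, the hostname ha-797954, localhost and minikube). A hypothetical spot check of the issued certificate (not captured in this run) would be to have openssl print the SAN extension:

    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem \
        | grep -A1 'Subject Alternative Name'
    # should list DNS/IP entries matching the san=[...] set in the log line above
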
	I0920 21:24:16.651963   35620 provision.go:177] copyRemoteCerts
	I0920 21:24:16.652032   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:24:16.652056   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.655308   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.655743   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.655770   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.655998   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.656183   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.656342   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.656459   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:24:16.741854   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:24:16.741930   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0920 21:24:16.769896   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:24:16.769984   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 21:24:16.797560   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:24:16.797641   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:24:16.825493   35620 provision.go:87] duration metric: took 697.34474ms to configureAuth
	I0920 21:24:16.825525   35620 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:24:16.825772   35620 config.go:182] Loaded profile config "ha-797954": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:24:16.825875   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:24:16.828713   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.829075   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:24:16.829097   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:24:16.829310   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:24:16.829521   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.829671   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:24:16.829790   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:24:16.829973   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:24:16.830150   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:24:16.830165   35620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:25:47.606550   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:25:47.606585   35620 machine.go:96] duration metric: took 1m31.842645407s to provisionDockerMachine
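
Nearly all of that 1m31.8s is the single SSH command just above: the `printf ... | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio` issued at 21:24:16 did not return until 21:25:47, so the crio restart dominates provisioning time. If that looks suspect, a hypothetical follow-up on the node (not part of this log) is to read the crio unit's journal around the restart:

    $ sudo journalctl -u crio --since "21:24" --no-pager | tail -n 50
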
	I0920 21:25:47.606601   35620 start.go:293] postStartSetup for "ha-797954" (driver="kvm2")
	I0920 21:25:47.606614   35620 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:25:47.606634   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.606928   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:25:47.606954   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.610404   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.610814   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.610840   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.611019   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.611211   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.611390   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.611500   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:47.699109   35620 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:25:47.704073   35620 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:25:47.704095   35620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:25:47.704156   35620 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:25:47.704260   35620 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:25:47.704279   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:25:47.704365   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:25:47.714823   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:25:47.740542   35620 start.go:296] duration metric: took 133.929154ms for postStartSetup
	I0920 21:25:47.740580   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.740893   35620 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0920 21:25:47.740919   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.743695   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.744153   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.744192   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.744320   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.744490   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.744633   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.744745   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	W0920 21:25:47.830525   35620 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0920 21:25:47.830557   35620 fix.go:56] duration metric: took 1m32.089169854s for fixHost
	I0920 21:25:47.830583   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.833572   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.834040   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.834066   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.834292   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.834525   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.834759   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.834982   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.835170   35620 main.go:141] libmachine: Using SSH client type: native
	I0920 21:25:47.835375   35620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.240 22 <nil> <nil>}
	I0920 21:25:47.835386   35620 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:25:47.943954   35620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726867547.898946556
	
	I0920 21:25:47.943982   35620 fix.go:216] guest clock: 1726867547.898946556
	I0920 21:25:47.943990   35620 fix.go:229] Guest: 2024-09-20 21:25:47.898946556 +0000 UTC Remote: 2024-09-20 21:25:47.830565171 +0000 UTC m=+92.216677937 (delta=68.381385ms)
	I0920 21:25:47.944028   35620 fix.go:200] guest clock delta is within tolerance: 68.381385ms
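
The tolerance check is a plain subtraction of the remote timestamp from the guest clock read over SSH; reproducing the arithmetic from the two values above (hypothetical, not captured output):

    $ echo '1726867547.898946556 - 1726867547.830565171' | bc -l
    .068381385    # seconds, i.e. the 68.381385ms delta deemed within tolerance
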
	I0920 21:25:47.944039   35620 start.go:83] releasing machines lock for "ha-797954", held for 1m32.202669546s
	I0920 21:25:47.944064   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.944319   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:25:47.947293   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.947673   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.947702   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.947863   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948458   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948626   35620 main.go:141] libmachine: (ha-797954) Calling .DriverName
	I0920 21:25:47.948696   35620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:25:47.948744   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.948837   35620 ssh_runner.go:195] Run: cat /version.json
	I0920 21:25:47.948871   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHHostname
	I0920 21:25:47.951364   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.951607   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.951823   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.951847   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.952030   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.952066   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:47.952090   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:47.952221   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.952297   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHPort
	I0920 21:25:47.952376   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.952448   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHKeyPath
	I0920 21:25:47.952513   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:47.952569   35620 main.go:141] libmachine: (ha-797954) Calling .GetSSHUsername
	I0920 21:25:47.952720   35620 sshutil.go:53] new ssh client: &{IP:192.168.39.240 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/ha-797954/id_rsa Username:docker}
	I0920 21:25:48.032466   35620 ssh_runner.go:195] Run: systemctl --version
	I0920 21:25:48.052519   35620 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:25:48.215416   35620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 21:25:48.222105   35620 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:25:48.222184   35620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:25:48.231627   35620 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 21:25:48.231653   35620 start.go:495] detecting cgroup driver to use...
	I0920 21:25:48.231724   35620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:25:48.247981   35620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:25:48.262652   35620 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:25:48.262724   35620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:25:48.276789   35620 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:25:48.290640   35620 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:25:48.443842   35620 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:25:48.596783   35620 docker.go:233] disabling docker service ...
	I0920 21:25:48.596871   35620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:25:48.614368   35620 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:25:48.630802   35620 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:25:48.782741   35620 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:25:48.932608   35620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:25:48.946785   35620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:25:48.966742   35620 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:25:48.966803   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.977971   35620 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:25:48.978038   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.989131   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:48.999976   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.011730   35620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:25:49.022492   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.034804   35620 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.046484   35620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:25:49.057266   35620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:25:49.067541   35620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:25:49.077076   35620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:25:49.219760   35620 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:25:51.603405   35620 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.383607239s)
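
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before that restart. A hypothetical spot check of the resulting drop-in (not part of this run):

    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
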
	I0920 21:25:51.603435   35620 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:25:51.603481   35620 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:25:51.608503   35620 start.go:563] Will wait 60s for crictl version
	I0920 21:25:51.608556   35620 ssh_runner.go:195] Run: which crictl
	I0920 21:25:51.612692   35620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:25:51.652796   35620 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:25:51.652866   35620 ssh_runner.go:195] Run: crio --version
	I0920 21:25:51.682397   35620 ssh_runner.go:195] Run: crio --version
	I0920 21:25:51.713410   35620 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:25:51.714486   35620 main.go:141] libmachine: (ha-797954) Calling .GetIP
	I0920 21:25:51.717401   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:51.717772   35620 main.go:141] libmachine: (ha-797954) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:bb:49", ip: ""} in network mk-ha-797954: {Iface:virbr1 ExpiryTime:2024-09-20 22:15:12 +0000 UTC Type:0 Mac:52:54:00:62:bb:49 Iaid: IPaddr:192.168.39.240 Prefix:24 Hostname:ha-797954 Clientid:01:52:54:00:62:bb:49}
	I0920 21:25:51.717796   35620 main.go:141] libmachine: (ha-797954) DBG | domain ha-797954 has defined IP address 192.168.39.240 and MAC address 52:54:00:62:bb:49 in network mk-ha-797954
	I0920 21:25:51.718081   35620 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:25:51.722824   35620 kubeadm.go:883] updating cluster {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:25:51.723005   35620 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:25:51.723054   35620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:25:51.765779   35620 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:25:51.765803   35620 crio.go:433] Images already preloaded, skipping extraction
	I0920 21:25:51.765847   35620 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:25:51.800197   35620 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:25:51.800220   35620 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:25:51.800229   35620 kubeadm.go:934] updating node { 192.168.39.240 8443 v1.31.1 crio true true} ...
	I0920 21:25:51.800346   35620 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-797954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.240
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:25:51.800429   35620 ssh_runner.go:195] Run: crio config
	I0920 21:25:51.857184   35620 cni.go:84] Creating CNI manager for ""
	I0920 21:25:51.857208   35620 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0920 21:25:51.857220   35620 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:25:51.857240   35620 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.240 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-797954 NodeName:ha-797954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.240"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.240 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:25:51.857381   35620 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.240
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-797954"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.240
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.240"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
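
Note that cgroupDriver: cgroupfs in the KubeletConfiguration above intentionally matches the cgroup_manager written into the CRI-O drop-in earlier; the rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new further below. A hypothetical pre-flight check of a config like this (the `kubeadm config validate` subcommand ships with recent kubeadm releases; its use here is an assumption, not something this run does):

    $ sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
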
	
	I0920 21:25:51.857400   35620 kube-vip.go:115] generating kube-vip config ...
	I0920 21:25:51.857438   35620 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0920 21:25:51.869355   35620 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0920 21:25:51.869487   35620 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
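
This static pod keeps the APIServerHAVIP (192.168.39.254 from the cluster config) bound to whichever control-plane node currently holds the plndr-cp-lock lease, announces it over ARP on eth0, and load-balances port 8443 across the control planes. Two hypothetical checks on the current leader (not captured in this run):

    $ ip -4 addr show dev eth0 | grep 192.168.39.254     # the VIP should be present on the leader only
    $ curl -k https://192.168.39.254:8443/healthz        # apiserver health behind the VIP; expect "ok"
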
	I0920 21:25:51.869556   35620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:25:51.879747   35620 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:25:51.879809   35620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0920 21:25:51.889590   35620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0920 21:25:51.907377   35620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:25:51.925001   35620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0920 21:25:51.942169   35620 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0920 21:25:51.961886   35620 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0920 21:25:51.965834   35620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:25:52.120260   35620 ssh_runner.go:195] Run: sudo systemctl start kubelet
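
With the unit files and the kube-vip manifest in place, kubelet is started again. A hypothetical way to confirm kubelet is active and the control-plane static pods are up afterwards (not part of this log):

    $ sudo systemctl is-active kubelet
    $ sudo crictl pods --state ready | grep -E 'kube-(apiserver|controller-manager|scheduler|vip)'
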
	I0920 21:25:52.140180   35620 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954 for IP: 192.168.39.240
	I0920 21:25:52.140203   35620 certs.go:194] generating shared ca certs ...
	I0920 21:25:52.140219   35620 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.140414   35620 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:25:52.140465   35620 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:25:52.140475   35620 certs.go:256] generating profile certs ...
	I0920 21:25:52.140553   35620 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/client.key
	I0920 21:25:52.140578   35620 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489
	I0920 21:25:52.140590   35620 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.240 192.168.39.63 192.168.39.108 192.168.39.254]
	I0920 21:25:52.340380   35620 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 ...
	I0920 21:25:52.340411   35620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489: {Name:mka8fc944fe2dbf9c2642397fdc5913effc5ae98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.340598   35620 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489 ...
	I0920 21:25:52.340613   35620 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489: {Name:mk782cfcf686f75aaf2a141659862084d03c9680 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:25:52.340708   35620 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt.9df47489 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt
	I0920 21:25:52.340930   35620 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key.9df47489 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key
	I0920 21:25:52.341071   35620 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key
	I0920 21:25:52.341086   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:25:52.341098   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:25:52.341109   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:25:52.341122   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:25:52.341135   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:25:52.341146   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:25:52.341161   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:25:52.341173   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:25:52.341221   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:25:52.341261   35620 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:25:52.341270   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:25:52.341289   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:25:52.341314   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:25:52.341340   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:25:52.341379   35620 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:25:52.341430   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.341443   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.341453   35620 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.341982   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:25:52.368056   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:25:52.393457   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:25:52.419144   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:25:52.445333   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 21:25:52.470934   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:25:52.497933   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:25:52.522974   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/ha-797954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:25:52.547756   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:25:52.572376   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:25:52.597981   35620 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:25:52.623093   35620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:25:52.640555   35620 ssh_runner.go:195] Run: openssl version
	I0920 21:25:52.646929   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:25:52.658241   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.662706   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.662758   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:25:52.668333   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:25:52.677996   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:25:52.688886   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.693503   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.693552   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:25:52.699383   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 21:25:52.709505   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:25:52.720468   35620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.725056   35620 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.725108   35620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:25:52.730838   35620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:25:52.740207   35620 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:25:52.744843   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 21:25:52.750439   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 21:25:52.756421   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 21:25:52.762115   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 21:25:52.767749   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 21:25:52.773331   35620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
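
	For reference, the run of openssl entries above corresponds to two operations minikube performs over SSH: publishing the profile's CA certificates into the guest trust store under their subject-hash names, and confirming that each control-plane certificate stays valid for at least another 24 hours. A minimal stand-alone sketch of the same checks, reusing the paths shown in the log (illustrative only, not part of the captured output):

		# link a CA into the trust store under its subject-hash name (what the ln -fs entries above do)
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0
		# exit non-zero if the certificate would expire within the next 86400 seconds (24h)
		openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt
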
	I0920 21:25:52.778989   35620 kubeadm.go:392] StartCluster: {Name:ha-797954 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-797954 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.240 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.63 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.108 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.210 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:25:52.779124   35620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:25:52.779164   35620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:25:52.816983   35620 cri.go:89] found id: "a959d781d8bf0adb294844a603047f84c05f68a635b7d1871e3bf12e35241a92"
	I0920 21:25:52.817004   35620 cri.go:89] found id: "91b817752376881be1a3e8e15845575f07c5809aa959117fb187c3e302af6c64"
	I0920 21:25:52.817008   35620 cri.go:89] found id: "4fef15b7cfc35ef22719ef72675914a19a60d5438876eb0bd4d9418e7b9dbf3a"
	I0920 21:25:52.817011   35620 cri.go:89] found id: "e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50"
	I0920 21:25:52.817014   35620 cri.go:89] found id: "0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab"
	I0920 21:25:52.817016   35620 cri.go:89] found id: "0fdd2277dc9929f215b6bde1daa034340fc51bb55719eb11ed1f9439a2b2db79"
	I0920 21:25:52.817019   35620 cri.go:89] found id: "d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2"
	I0920 21:25:52.817027   35620 cri.go:89] found id: "6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f"
	I0920 21:25:52.817030   35620 cri.go:89] found id: "371dadd621425046ff42d7cf186cfafa4a718ac7224608c5e6d36c9066d465cb"
	I0920 21:25:52.817035   35620 cri.go:89] found id: "34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463"
	I0920 21:25:52.817037   35620 cri.go:89] found id: "613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e"
	I0920 21:25:52.817042   35620 cri.go:89] found id: "3ec2d3bd57d8bc3ccd006dc4264e8ab141520abc91091151173106f57bbbf360"
	I0920 21:25:52.817044   35620 cri.go:89] found id: "0d975d52ea29267b26e7e5176e0f663068eaca38855ecb6dd786afd1aad53674"
	I0920 21:25:52.817047   35620 cri.go:89] found id: ""
	I0920 21:25:52.817085   35620 ssh_runner.go:195] Run: sudo runc list -f json
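
	The container IDs listed above were returned by querying CRI-O for every kube-system container, running or exited; the final "sudo runc list" call cross-checks the low-level runtime's view of the same state. An equivalent manual invocation on the node, using the same filters shown in the log (a sketch, not part of the captured output):

		# all kube-system container IDs known to CRI-O, including exited ones
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		# the low-level runtime's view of containers, as JSON
		sudo runc list -f json
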
	
	
	==> CRI-O <==
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.707919332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867859707892958,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8d08101-658e-4a6e-b646-fcd65f6477bb name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.708433491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d43ff8bd-8bd1-4681-b8da-3ca029d7ba22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.708553483Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d43ff8bd-8bd1-4681-b8da-3ca029d7ba22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.708942446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d43ff8bd-8bd1-4681-b8da-3ca029d7ba22 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.756309699Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26128caa-4e24-4d8a-adf1-856c83ec8c43 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.756408517Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26128caa-4e24-4d8a-adf1-856c83ec8c43 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.758331455Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e39e730b-bb25-4367-b82f-d241043a1a9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.759407426Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867859759382268,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e39e730b-bb25-4367-b82f-d241043a1a9e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.760054609Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af840641-bfe9-44e0-8422-0c4668fda676 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.760134400Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af840641-bfe9-44e0-8422-0c4668fda676 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.760688771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af840641-bfe9-44e0-8422-0c4668fda676 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.803637536Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9039ed0e-f333-4915-826f-dd826c01384c name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.803735510Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9039ed0e-f333-4915-826f-dd826c01384c name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.805163432Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a13af51d-893d-40fe-a4c1-059f6d219988 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.805761730Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867859805734644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a13af51d-893d-40fe-a4c1-059f6d219988 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.806264736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c6004df-f08a-43c8-b21e-7404890fd6c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.806341519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c6004df-f08a-43c8-b21e-7404890fd6c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.806780005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1c6004df-f08a-43c8-b21e-7404890fd6c6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.851634230Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a7eb958f-60e5-4563-b8a8-ebdd14d58db6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.851722154Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a7eb958f-60e5-4563-b8a8-ebdd14d58db6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.853282667Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c8fc1916-18d5-49ce-aeb5-0810a06c8861 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.853799688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867859853775385,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c8fc1916-18d5-49ce-aeb5-0810a06c8861 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.854310235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=524d35cc-05f3-4d1d-84b3-72cc149aeac8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.854364623Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=524d35cc-05f3-4d1d-84b3-72cc149aeac8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:30:59 ha-797954 crio[3600]: time="2024-09-20 21:30:59.854816247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:01d94fc81760189b0e8d6d650f1194c21d9c83c12dd57eaf8bd4552d8da97206,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726867636138693324,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726867603139709373,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726867599143105635,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb1af3ebd34a5c7f6e2f5387efdf51ea65b5ff12fda14bd42cf361db7412fee9,PodSandboxId:73c1f3b73fe01fd8e86744bf2f11cdcee5d08652b15989b0ce428deb091f10ed,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726867592721231801,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdf0f3f29d025cc94ed6c93e86a3feda794f2ea39af18279b0bb506f5f56c83a,PodSandboxId:15438a46df747b2417af80dcaf7ff679882a14c8b4912d00697f6e1258b3839f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726867584141366388,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05385037-e9ea-4afb-a256-3bfae42e00f5,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24a005f08f242ab1da69f85d6f27698aa173d4d173877618b4a6fab5659c3e75,PodSandboxId:5bc2ff4eaee7a6682e5a28894c7b01b5fc64fc88372077da070d4ac36ea39b9f,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1726867573690681571,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f66e1b3049ba1249b5133e70b8cda4c3,},Annotations:map[string]string{io.kubernetes.container.hash: 3123ec07,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd,PodSandboxId:718361b08fad6d8bab9962fd41ff8096345e56718e09e37876dc0222045b5f16,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726867559417252842,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78,PodSandboxId:9b769b2f2ea7cbb30f89fcb9e3ee2fcbf29e87de6def73062892e2eff50855fd,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726867559377741103,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c08b5b87
2427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2,PodSandboxId:18724a1cd47b218a632de6b9fda123a5b1698d9f06e6af1e32d22f56de3bfa25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867559104785697,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3,PodSandboxId:a091191b9d39f86ec084bdf98669934e8d91c1aebfda4c7521e0b15240bcd1dd,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726867559033958824,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /de
v/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760,PodSandboxId:45584b0106ac021002185b6bbb9c59e6429d44d498a50ca25251fb0f94af7d8d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726867558966601319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebd1504729b727080ceef6508f6ccb35,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419,PodSandboxId:ee726ff249d4f3843fd9f9ca8cb784cd129cbc92c2682ca464a578289ee058cc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726867558880935737,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b484e47a7571389d184140197721981a,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log
,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005,PodSandboxId:bee79a999f9f07a583c52f819764d1aa6d6f0d979e3a7f55f67f5065d82e70f7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726867558859384851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c,PodSandboxId:bf170f47ff010b86aff8ab363182805a8210b59d4ee2289abb166ea50cfa351f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726867553596407180,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:468d1e1f7ac8a4da5e205e8c89b93332c6f183cdea3f321a6312615736759fcd,PodSandboxId:d6db457e8bbe85c6e41df617340b8686c1a34807e46736ae727589954288d5af,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726867101601398164,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-44q29,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 443788ae-8329-43bd-819c-a80e37254cf3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50,PodSandboxId:48e2a20451c46f36a6abe182959063661e3e73cea069b5c73bd6f8b2da99976a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958883599803,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-djhfr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a8ca35db-3f4c-4e4c-a9e4-e12abecff539,},Annotations:map[string]string{io.kube
rnetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab,PodSandboxId:891e1332d4c2d9bf9798a33160a121309207239e14a004e8f60cacc3acfb82e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726866958850709309,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-bkcf8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d5aca96-de3a-4154-b195-b69aa45e2cfc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2,PodSandboxId:794a246d61681ebbdea232e6fbeca9ef4cc25a0d0c82dde8ac7d7b282d51387c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,Runti
meHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726866946745800384,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-6lk2f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddd6c393-68ce-4fff-9f0d-6e0147e03926,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f,PodSandboxId:457a98ab41d5f8101dff25e77566f290b6aff09becd9151be37c0f145ab09853,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3a
d6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726866946560939000,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qx9ph,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d20fb2-8eff-4921-add4-ee36b8fba334,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e,PodSandboxId:388cd6346f6a755d5f07fd0e238b55cb93ef5e7779cdd83c388201f332e6e1b7,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792
cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726866934980318430,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b34937b72deaf3e5b81f6757b9ac60f,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463,PodSandboxId:0e7df03407c5d5ceaa8bd9a7b1ef67532ba11bb24bbb0eb9baa11973663324da,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedA
t:1726866934988572235,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-797954,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0614470810fc3ee09afc2f16488a5cb3,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=524d35cc-05f3-4d1d-84b3-72cc149aeac8 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	01d94fc817601       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   15438a46df747       storage-provisioner
	63327c191bb5e       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   2                   45584b0106ac0       kube-controller-manager-ha-797954
	9acf7608b1c89       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            3                   ee726ff249d4f       kube-apiserver-ha-797954
	cb1af3ebd34a5       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   73c1f3b73fe01       busybox-7dff88458-44q29
	cdf0f3f29d025       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   15438a46df747       storage-provisioner
	24a005f08f242       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   5bc2ff4eaee7a       kube-vip-ha-797954
	4945a06bdca1b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      5 minutes ago       Running             kube-proxy                1                   718361b08fad6       kube-proxy-qx9ph
	c5ae7c3c8fc8a       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      5 minutes ago       Running             kindnet-cni               1                   9b769b2f2ea7c       kindnet-6lk2f
	c08b5b872427f       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   18724a1cd47b2       coredns-7c65d6cfc9-djhfr
	bd2b308172eed       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      5 minutes ago       Running             etcd                      1                   a091191b9d39f       etcd-ha-797954
	0d38acbb961f5       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      5 minutes ago       Exited              kube-controller-manager   1                   45584b0106ac0       kube-controller-manager-ha-797954
	b6084f1904ab2       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      5 minutes ago       Exited              kube-apiserver            2                   ee726ff249d4f       kube-apiserver-ha-797954
	8d34f2b26411d       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      5 minutes ago       Running             kube-scheduler            1                   bee79a999f9f0       kube-scheduler-ha-797954
	79815991ab9ec       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   1                   bf170f47ff010       coredns-7c65d6cfc9-bkcf8
	468d1e1f7ac8a       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   12 minutes ago      Exited              busybox                   0                   d6db457e8bbe8       busybox-7dff88458-44q29
	e3c9e6da67286       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   48e2a20451c46       coredns-7c65d6cfc9-djhfr
	0e272e92406da       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      15 minutes ago      Exited              coredns                   0                   891e1332d4c2d       coredns-7c65d6cfc9-bkcf8
	d34d703bb77f7       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      15 minutes ago      Exited              kindnet-cni               0                   794a246d61681       kindnet-6lk2f
	6367518948d4f       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      15 minutes ago      Exited              kube-proxy                0                   457a98ab41d5f       kube-proxy-qx9ph
	34e481bafb0d8       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      15 minutes ago      Exited              kube-scheduler            0                   0e7df03407c5d       kube-scheduler-ha-797954
	613c4c00096cd       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      15 minutes ago      Exited              etcd                      0                   388cd6346f6a7       etcd-ha-797954
	
	
	==> coredns [0e272e92406dac1c42aa073982dccbd65d08e6aed4919cfcc1a72af243755bab] <==
	[INFO] 10.244.0.4:37051 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.021353644s
	[INFO] 10.244.0.4:56814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025486s
	[INFO] 10.244.0.4:60619 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004011357s
	[INFO] 10.244.0.4:45781 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000219299s
	[INFO] 10.244.1.2:58895 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0003827s
	[INFO] 10.244.1.2:52799 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000348157s
	[INFO] 10.244.1.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000146401s
	[INFO] 10.244.1.2:58888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000279379s
	[INFO] 10.244.2.2:45400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149773s
	[INFO] 10.244.2.2:42656 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000284841s
	[INFO] 10.244.0.4:39831 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168662s
	[INFO] 10.244.1.2:33336 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145322s
	[INFO] 10.244.1.2:44953 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000245865s
	[INFO] 10.244.2.2:58673 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000183778s
	[INFO] 10.244.2.2:55978 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000198561s
	[INFO] 10.244.1.2:49222 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000209152s
	[INFO] 10.244.1.2:57989 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183561s
	[INFO] 10.244.1.2:57201 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00017075s
	[INFO] 10.244.2.2:46144 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000184056s
	[INFO] 10.244.2.2:53312 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152142s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1720&timeout=5m5s&timeoutSeconds=305&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1726&timeout=8m52s&timeoutSeconds=532&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1760&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [79815991ab9ec9c10f0200369572d07476dca30893372af11ac070df4e2a877c] <==
	[INFO] plugin/kubernetes: Trace[1058542172]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:00.738) (total time: 10001ms):
	Trace[1058542172]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:26:10.740)
	Trace[1058542172]: [10.001950418s] [10.001950418s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1665169565]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:00.868) (total time: 10001ms):
	Trace[1665169565]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (21:26:10.870)
	Trace[1665169565]: [10.001779074s] [10.001779074s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:47780->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.5:47780->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42526->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:42526->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c08b5b872427f1a5a24f544b9f33bfeb68f7ab9f51e9dbcb0bc2e8a49f3f0fd2] <==
	[INFO] plugin/kubernetes: Trace[320100368]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:10.700) (total time: 11692ms):
	Trace[320100368]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40994->10.96.0.1:443: read: connection reset by peer 11691ms (21:26:22.392)
	Trace[320100368]: [11.692140338s] [11.692140338s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40994->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[770252672]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (20-Sep-2024 21:26:10.577) (total time: 11815ms):
	Trace[770252672]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer 11814ms (21:26:22.392)
	Trace[770252672]: [11.815798267s] [11.815798267s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.6:40984->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [e3c9e6da67286753e0227cd056c3b81db7f87e000f04713451c0bc3594ee3e50] <==
	[INFO] 10.244.1.2:50084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000173888s
	[INFO] 10.244.2.2:46588 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000281143s
	[INFO] 10.244.2.2:49861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002195678s
	[INFO] 10.244.2.2:43162 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194152s
	[INFO] 10.244.2.2:38062 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149126s
	[INFO] 10.244.2.2:46161 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001726482s
	[INFO] 10.244.2.2:38358 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105606s
	[INFO] 10.244.0.4:34794 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107864s
	[INFO] 10.244.0.4:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073572s
	[INFO] 10.244.0.4:59058 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059888s
	[INFO] 10.244.1.2:33254 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111604s
	[INFO] 10.244.1.2:55753 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094078s
	[INFO] 10.244.2.2:46481 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128659s
	[INFO] 10.244.2.2:36242 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000192291s
	[INFO] 10.244.0.4:45612 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142384s
	[INFO] 10.244.0.4:32977 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000240882s
	[INFO] 10.244.0.4:36320 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000157906s
	[INFO] 10.244.0.4:57209 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000121261s
	[INFO] 10.244.1.2:52740 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000116141s
	[INFO] 10.244.2.2:45858 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145799s
	[INFO] 10.244.2.2:32912 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000224434s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1726&timeout=9m49s&timeoutSeconds=589&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1726&timeout=7m5s&timeoutSeconds=425&watch=true": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	Name:               ha-797954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_15_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:15:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:31:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:26:42 +0000   Fri, 20 Sep 2024 21:15:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.240
	  Hostname:    ha-797954
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ee8e79db7754067acc26d8f79541cf4
	  System UUID:                1ee8e79d-b775-4067-acc2-6d8f79541cf4
	  Boot ID:                    2a01d700-0c79-4cac-934b-ff091b3f743b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-44q29              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-bkcf8             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7c65d6cfc9-djhfr             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-797954                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-6lk2f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-797954             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-797954    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qx9ph                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-797954             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-797954                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m18s                  kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  15m                    kubelet          Node ha-797954 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                    kubelet          Node ha-797954 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m                    kubelet          Node ha-797954 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           15m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   NodeReady                15m                    kubelet          Node ha-797954 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Warning  ContainerGCFailed        5m19s (x2 over 6m19s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             5m7s (x3 over 5m57s)   kubelet          Node ha-797954 status is now: NodeNotReady
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-797954 event: Registered Node ha-797954 in Controller
	
	
	Name:               ha-797954-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_16_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:16:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:30:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:27:24 +0000   Fri, 20 Sep 2024 21:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.63
	  Hostname:    ha-797954-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 b99cf888073c415c8a60d9eed0a33a77
	  System UUID:                b99cf888-073c-415c-8a60-d9eed0a33a77
	  Boot ID:                    35be719f-c336-4c56-98ca-55749fdadbd7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-5m4df                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-797954-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-qh5rt                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-797954-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-797954-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2pjxq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-797954-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-797954-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  NodeNotReady             10m                    node-controller  Node ha-797954-m02 status is now: NodeNotReady
	  Normal  Starting                 4m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m45s (x8 over 4m45s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s (x8 over 4m45s)  kubelet          Node ha-797954-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s (x7 over 4m45s)  kubelet          Node ha-797954-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           4m15s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	  Normal  RegisteredNode           3m16s                  node-controller  Node ha-797954-m02 event: Registered Node ha-797954-m02 in Controller
	
	
	Name:               ha-797954-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-797954-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=ha-797954
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_18_56_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:18:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-797954-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:28:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 21:28:12 +0000   Fri, 20 Sep 2024 21:29:14 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.210
	  Hostname:    ha-797954-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 301819962cda48ca9fc543396f387646
	  System UUID:                30181996-2cda-48ca-9fc5-43396f387646
	  Boot ID:                    517f7625-9dbd-4fa7-9ee7-6452f1cfd966
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-j4drs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-5zw6l              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-ghvv9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   NodeReady                11m                    kubelet          Node ha-797954-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           4m15s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Normal   RegisteredNode           3m16s                  node-controller  Node ha-797954-m04 event: Registered Node ha-797954-m04 in Controller
	  Warning  Rebooted                 2m48s (x3 over 2m48s)  kubelet          Node ha-797954-m04 has been rebooted, boot id: 517f7625-9dbd-4fa7-9ee7-6452f1cfd966
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m48s (x4 over 2m48s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x4 over 2m48s)  kubelet          Node ha-797954-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x4 over 2m48s)  kubelet          Node ha-797954-m04 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             2m48s                  kubelet          Node ha-797954-m04 status is now: NodeNotReady
	  Normal   NodeReady                2m48s (x2 over 2m48s)  kubelet          Node ha-797954-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m41s)   node-controller  Node ha-797954-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.856812] systemd-fstab-generator[588]: Ignoring "noauto" option for root device
	[  +0.057896] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.062395] systemd-fstab-generator[600]: Ignoring "noauto" option for root device
	[  +0.182824] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.137335] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.286470] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +4.136041] systemd-fstab-generator[754]: Ignoring "noauto" option for root device
	[  +4.054578] systemd-fstab-generator[890]: Ignoring "noauto" option for root device
	[  +0.065217] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.071912] systemd-fstab-generator[1308]: Ignoring "noauto" option for root device
	[  +0.088568] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.488875] kauditd_printk_skb: 21 callbacks suppressed
	[ +12.290079] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:16] kauditd_printk_skb: 26 callbacks suppressed
	[Sep20 21:25] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	[  +0.161736] systemd-fstab-generator[3537]: Ignoring "noauto" option for root device
	[  +0.184396] systemd-fstab-generator[3551]: Ignoring "noauto" option for root device
	[  +0.151239] systemd-fstab-generator[3563]: Ignoring "noauto" option for root device
	[  +0.285569] systemd-fstab-generator[3591]: Ignoring "noauto" option for root device
	[  +2.898817] systemd-fstab-generator[3686]: Ignoring "noauto" option for root device
	[  +1.562216] kauditd_printk_skb: 122 callbacks suppressed
	[  +5.093490] kauditd_printk_skb: 1 callbacks suppressed
	[Sep20 21:26] kauditd_printk_skb: 87 callbacks suppressed
	[ +39.173141] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [613c4c00096cdb3abb542165657b473fbd4e44c8270253a9ca66ab5cb51ff14e] <==
	2024/09/20 21:24:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2024/09/20 21:24:16 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-20T21:24:17.037225Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.240:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:24:17.037384Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.240:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T21:24:17.037596Z","caller":"etcdserver/server.go:1512","msg":"skipped leadership transfer; local server is not leader","local-member-id":"1cdefa49b8abbef9","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-09-20T21:24:17.037762Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037805Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037831Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037894Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.037990Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038061Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038075Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c769175a379ffa3e"}
	{"level":"info","ts":"2024-09-20T21:24:17.038084Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038092Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038131Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038195Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038240Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038275Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.038303Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:24:17.042595Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"warn","ts":"2024-09-20T21:24:17.042688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"9.29968663s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: server stopped"}
	{"level":"info","ts":"2024-09-20T21:24:17.042812Z","caller":"traceutil/trace.go:171","msg":"trace[364636347] range","detail":"{range_begin:; range_end:; }","duration":"9.299824074s","start":"2024-09-20T21:24:07.742973Z","end":"2024-09-20T21:24:17.042797Z","steps":["trace[364636347] 'agreement among raft nodes before linearized reading'  (duration: 9.299683567s)"],"step_count":1}
	{"level":"error","ts":"2024-09-20T21:24:17.042851Z","caller":"etcdhttp/health.go:367","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: server stopped\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHttpEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:367\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2141\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2519\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:2943\nnet/http.(*conn).serve\n\tnet/http/server.go:2014"}
	{"level":"info","ts":"2024-09-20T21:24:17.043640Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.240:2380"}
	{"level":"info","ts":"2024-09-20T21:24:17.043660Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"ha-797954","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.240:2380"],"advertise-client-urls":["https://192.168.39.240:2379"]}
	
	
	==> etcd [bd2b308172eed90bdd95877e35af5cedf81ab84ef5922e67df30c5085ea7fee3] <==
	{"level":"info","ts":"2024-09-20T21:27:36.030428Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.030935Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.055918Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"582d2db74829ae92","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-20T21:27:36.055978Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:27:36.057429Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1cdefa49b8abbef9","to":"582d2db74829ae92","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-20T21:27:36.057453Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:28:26.161556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1cdefa49b8abbef9 switched to configuration voters=(2080375272429567737 14369041762350004798)"}
	{"level":"info","ts":"2024-09-20T21:28:26.164216Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"e0745912b0778b6e","local-member-id":"1cdefa49b8abbef9","removed-remote-peer-id":"582d2db74829ae92","removed-remote-peer-urls":["https://192.168.39.108:2380"]}
	{"level":"info","ts":"2024-09-20T21:28:26.164572Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.164929Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:28:26.165121Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.165551Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:28:26.165728Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.164633Z","caller":"etcdserver/server.go:987","msg":"rejected Raft message from removed member","local-member-id":"1cdefa49b8abbef9","removed-member-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.165897Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-09-20T21:28:26.165866Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.166193Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","error":"context canceled"}
	{"level":"warn","ts":"2024-09-20T21:28:26.166259Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"582d2db74829ae92","error":"failed to read 582d2db74829ae92 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-20T21:28:26.166341Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.166643Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92","error":"context canceled"}
	{"level":"info","ts":"2024-09-20T21:28:26.167180Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"1cdefa49b8abbef9","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:28:26.167403Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"582d2db74829ae92"}
	{"level":"info","ts":"2024-09-20T21:28:26.167626Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"1cdefa49b8abbef9","removed-remote-peer-id":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.180413Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"1cdefa49b8abbef9","remote-peer-id-stream-handler":"1cdefa49b8abbef9","remote-peer-id-from":"582d2db74829ae92"}
	{"level":"warn","ts":"2024-09-20T21:28:26.184685Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"1cdefa49b8abbef9","remote-peer-id-stream-handler":"1cdefa49b8abbef9","remote-peer-id-from":"582d2db74829ae92"}
	
	
	==> kernel <==
	 21:31:00 up 15 min,  0 users,  load average: 0.23, 0.53, 0.44
	Linux ha-797954 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [c5ae7c3c8fc8a02e9ccbadaccb0e916ad225a3d05c746d2c55e00cf8fb447e78] <==
	I0920 21:30:20.545205       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:30:30.539423       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:30:30.539667       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:30:30.539875       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:30:30.539903       1 main.go:299] handling current node
	I0920 21:30:30.539936       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:30:30.539953       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:30:40.544727       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:30:40.544901       1 main.go:299] handling current node
	I0920 21:30:40.544933       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:30:40.544939       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:30:40.545193       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:30:40.545217       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:30:50.543594       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:30:50.543715       1 main.go:299] handling current node
	I0920 21:30:50.543745       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:30:50.543765       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:30:50.543937       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:30:50.543967       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:31:00.534884       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:31:00.534915       1 main.go:299] handling current node
	I0920 21:31:00.534928       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:31:00.534932       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:31:00.535030       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:31:00.535036       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d34d703bb77f7f220923b6408f18e4c7290b613361f81bccb1f5198cf8ab53f2] <==
	I0920 21:23:37.972785       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:23:47.970974       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:23:47.971078       1 main.go:299] handling current node
	I0920 21:23:47.971162       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:23:47.971186       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:23:47.971335       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:23:47.971358       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:23:47.971442       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:23:47.971573       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:23:57.973347       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:23:57.973397       1 main.go:299] handling current node
	I0920 21:23:57.973410       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:23:57.973416       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:23:57.973658       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:23:57.973700       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:23:57.973811       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:23:57.973833       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	I0920 21:24:07.971140       1 main.go:295] Handling node with IPs: map[192.168.39.240:{}]
	I0920 21:24:07.971258       1 main.go:299] handling current node
	I0920 21:24:07.971301       1 main.go:295] Handling node with IPs: map[192.168.39.63:{}]
	I0920 21:24:07.971320       1 main.go:322] Node ha-797954-m02 has CIDR [10.244.1.0/24] 
	I0920 21:24:07.971541       1 main.go:295] Handling node with IPs: map[192.168.39.108:{}]
	I0920 21:24:07.971574       1 main.go:322] Node ha-797954-m03 has CIDR [10.244.2.0/24] 
	I0920 21:24:07.971647       1 main.go:295] Handling node with IPs: map[192.168.39.210:{}]
	I0920 21:24:07.971666       1 main.go:322] Node ha-797954-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [9acf7608b1c89f78f8a30b337bbab3b8983f18408bff7726791dd7fea2c966dd] <==
	I0920 21:26:41.157159       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 21:26:41.169439       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 21:26:41.184808       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 21:26:41.188009       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 21:26:41.188024       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 21:26:41.188154       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 21:26:41.188239       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 21:26:41.188263       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 21:26:41.188323       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 21:26:41.188357       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 21:26:41.192800       1 aggregator.go:171] initial CRD sync complete...
	I0920 21:26:41.192885       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 21:26:41.192894       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 21:26:41.192901       1 cache.go:39] Caches are synced for autoregister controller
	I0920 21:26:41.198428       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:26:41.198448       1 policy_source.go:224] refreshing policies
	I0920 21:26:41.229308       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	W0920 21:26:41.230202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.63]
	I0920 21:26:41.232865       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 21:26:41.253454       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0920 21:26:41.265893       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0920 21:26:42.060879       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0920 21:26:42.783985       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.108 192.168.39.240 192.168.39.63]
	W0920 21:26:52.793149       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240 192.168.39.63]
	W0920 21:28:42.786612       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.240 192.168.39.63]
	
	
	==> kube-apiserver [b6084f1904ab2d42dd56300e192a8932b203f4274d48b0ea823a48acc3ea3419] <==
	I0920 21:25:59.429382       1 options.go:228] external host was not specified, using 192.168.39.240
	I0920 21:25:59.433715       1 server.go:142] Version: v1.31.1
	I0920 21:25:59.433762       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:25:59.995671       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0920 21:26:00.045756       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:26:00.056940       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0920 21:26:00.056977       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0920 21:26:00.057329       1 instance.go:232] Using reconciler: lease
	W0920 21:26:19.988446       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0920 21:26:19.990953       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0920 21:26:20.058873       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0920 21:26:20.058996       1 instance.go:225] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [0d38acbb961f53cbc2e1941f21bfada4d86ed3343c926314f7f73e63dad87760] <==
	I0920 21:25:59.905886       1 serving.go:386] Generated self-signed cert in-memory
	I0920 21:26:00.614025       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0920 21:26:00.614169       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:26:00.615828       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0920 21:26:00.616599       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0920 21:26:00.616857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 21:26:00.616967       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0920 21:26:21.067690       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.240:8443/healthz\": dial tcp 192.168.39.240:8443: connect: connection refused"
	
	
	==> kube-controller-manager [63327c191bb5e5a2d1a93611fcf6ae00e68dbd669c5c13f30880ddd4120905d5] <==
	I0920 21:28:24.832060       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="57.153µs"
	I0920 21:28:24.938127       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.161µs"
	I0920 21:28:25.053717       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.557µs"
	I0920 21:28:25.063036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="60.664µs"
	I0920 21:28:26.822595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.307136ms"
	I0920 21:28:26.822727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="67.567µs"
	I0920 21:28:37.233729       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-797954-m04"
	I0920 21:28:37.233845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m03"
	E0920 21:28:37.295670       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-797954-m03\", UID:\"abdee187-93c7-4270-bd9b-6ed3c6589acc\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-797954-m03\", UID:\"f2778944-4f3b-4081-a9da-d139a29e0625\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-797954-m03\" not found" logger="UnhandledError"
	E0920 21:28:45.546815       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:28:45.546876       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:28:45.546891       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:28:45.546906       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:28:45.546912       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:29:05.547882       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:29:05.547992       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:29:05.548018       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:29:05.548054       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	E0920 21:29:05.548077       1 gc_controller.go:151] "Failed to get node" err="node \"ha-797954-m03\" not found" logger="pod-garbage-collector-controller" node="ha-797954-m03"
	I0920 21:29:14.579532       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:29:14.609853       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:29:14.668907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.34789ms"
	I0920 21:29:14.669286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="82.854µs"
	I0920 21:29:15.679369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	I0920 21:29:19.736280       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-797954-m04"
	
	
	==> kube-proxy [4945a06bdca1b2b38dab867c97225dae17f62ecd3304af5f57cb805cbf890ccd] <==
	E0920 21:26:04.344667       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:07.417378       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:13.561561       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:22.776771       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0920 21:26:41.207962       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-797954\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0920 21:26:41.208065       1 server.go:646] "Can't determine this node's IP, assuming loopback; if this is incorrect, please set the --bind-address flag"
	E0920 21:26:41.208162       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:26:41.327737       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:26:41.327803       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:26:41.327839       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:26:41.331141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:26:41.331757       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:26:41.331962       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:26:41.337045       1 config.go:199] "Starting service config controller"
	I0920 21:26:41.337129       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:26:41.337199       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:26:41.337308       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:26:41.341128       1 config.go:328] "Starting node config controller"
	I0920 21:26:41.341188       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:26:41.438243       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:26:41.438412       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:26:41.441433       1 shared_informer.go:320] Caches are synced for node config
	W0920 21:29:26.296985       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0920 21:29:26.296985       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0920 21:29:26.297163       1 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [6367518948d4f3bc218a70b7f879daebb6d8fd2d6fd60fb6468fda7fd05e6d0f] <==
	E0920 21:23:06.172135       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242104       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242637       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:09.242853       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:09.242930       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.386427       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.386568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.386856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.387025       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:15.387987       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:15.388191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:24.602178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:24.602250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:27.672440       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:27.672590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:27.672735       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:27.672755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:49.176094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:49.176214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:49.176390       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:49.178433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get \"https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1704\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	W0920 21:23:52.248893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698": dial tcp 192.168.39.254:8443: connect: no route to host
	E0920 21:23:52.249256       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-797954&resourceVersion=1698\": dial tcp 192.168.39.254:8443: connect: no route to host" logger="UnhandledError"
	
	
	==> kube-scheduler [34e481bafb0d8683614d59e2b460c50f05fb034def4820d78ae0ef17e63e4463] <==
	E0920 21:18:56.228063       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.234373       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 586765d9-0991-47c1-aa4f-7c82cb8a2e76(kube-system/kube-proxy-ghvv9) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-ghvv9"
	E0920 21:18:56.234534       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-ghvv9\": pod kube-proxy-ghvv9 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-ghvv9"
	I0920 21:18:56.234608       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ghvv9" node="ha-797954-m04"
	E0920 21:18:56.279871       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4kct8" node="ha-797954-m04"
	E0920 21:18:56.280008       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4kct8\": pod kindnet-4kct8 is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-4kct8"
	E0920 21:18:56.280247       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-dpjmv" node="ha-797954-m04"
	E0920 21:18:56.280445       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-dpjmv\": pod kube-proxy-dpjmv is already assigned to node \"ha-797954-m04\"" pod="kube-system/kube-proxy-dpjmv"
	E0920 21:18:56.401044       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:18:56.401344       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod e53077ca-c9f3-433c-8f93-766336e4a9ab(kube-system/kindnet-r4kzr) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-r4kzr"
	E0920 21:18:56.401460       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-r4kzr\": pod kindnet-r4kzr is already assigned to node \"ha-797954-m04\"" pod="kube-system/kindnet-r4kzr"
	I0920 21:18:56.401669       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-r4kzr" node="ha-797954-m04"
	E0920 21:24:08.140549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0920 21:24:08.634304       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:09.111123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0920 21:24:09.759332       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0920 21:24:11.657436       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: unknown (get persistentvolumeclaims)" logger="UnhandledError"
	E0920 21:24:11.960214       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0920 21:24:12.261833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:13.719425       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0920 21:24:13.802712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: unknown (get nodes)" logger="UnhandledError"
	E0920 21:24:13.844534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0920 21:24:14.171986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0920 21:24:15.531125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: unknown (get services)" logger="UnhandledError"
	E0920 21:24:16.940022       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8d34f2b26411d6ad84d8e5b7e9a72a27925db5d7eb2207539073fc7cafe36005] <==
	W0920 21:26:37.169381       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.169538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.39.240:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.465392       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.240:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.465598       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.39.240:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.525008       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.39.240:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.525106       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://192.168.39.240:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:37.917370       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:37.917574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get \"https://192.168.39.240:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.416159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.39.240:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.416231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.39.240:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.423124       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.240:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.423206       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get \"https://192.168.39.240:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:38.869594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.39.240:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:38.869666       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get \"https://192.168.39.240:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:39.190993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.240:8443: connect: connection refused
	E0920 21:26:39.191055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.39.240:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.39.240:8443: connect: connection refused" logger="UnhandledError"
	W0920 21:26:41.090431       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 21:26:41.090541       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 21:26:41.091142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 21:26:41.091278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:26:41.092671       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 21:26:41.092714       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 21:27:03.880120       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 21:28:22.858040       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kc2r9\": pod busybox-7dff88458-kc2r9 is already assigned to node \"ha-797954-m04\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-kc2r9" node="ha-797954-m04"
	E0920 21:28:22.858660       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-kc2r9\": pod busybox-7dff88458-kc2r9 is already assigned to node \"ha-797954-m04\"" pod="default/busybox-7dff88458-kc2r9"
	
	
	==> kubelet <==
	Sep 20 21:29:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:29:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:29:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:29:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:29:41 ha-797954 kubelet[1315]: E0920 21:29:41.412016    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867781411608097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:29:41 ha-797954 kubelet[1315]: E0920 21:29:41.412092    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867781411608097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:29:51 ha-797954 kubelet[1315]: E0920 21:29:51.413906    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867791413554145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:29:51 ha-797954 kubelet[1315]: E0920 21:29:51.413955    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867791413554145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:01 ha-797954 kubelet[1315]: E0920 21:30:01.415958    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867801415552632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:01 ha-797954 kubelet[1315]: E0920 21:30:01.416239    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867801415552632,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:11 ha-797954 kubelet[1315]: E0920 21:30:11.419556    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867811418347443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:11 ha-797954 kubelet[1315]: E0920 21:30:11.420009    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867811418347443,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:21 ha-797954 kubelet[1315]: E0920 21:30:21.422789    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867821422283854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:21 ha-797954 kubelet[1315]: E0920 21:30:21.423111    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867821422283854,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:31 ha-797954 kubelet[1315]: E0920 21:30:31.426104    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867831425179931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:31 ha-797954 kubelet[1315]: E0920 21:30:31.426167    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867831425179931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:41 ha-797954 kubelet[1315]: E0920 21:30:41.144773    1315 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:30:41 ha-797954 kubelet[1315]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:30:41 ha-797954 kubelet[1315]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:30:41 ha-797954 kubelet[1315]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:30:41 ha-797954 kubelet[1315]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:30:41 ha-797954 kubelet[1315]: E0920 21:30:41.428354    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867841427869664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:41 ha-797954 kubelet[1315]: E0920 21:30:41.428383    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867841427869664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:51 ha-797954 kubelet[1315]: E0920 21:30:51.430451    1315 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867851430232197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:30:51 ha-797954 kubelet[1315]: E0920 21:30:51.430590    1315 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726867851430232197,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146316,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 21:30:59.372284   37998 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-9614/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-797954 -n ha-797954
helpers_test.go:261: (dbg) Run:  kubectl --context ha-797954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.91s)
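The "bufio.Scanner: token too long" error reported in the stderr block above is standard Go library behavior rather than anything specific to minikube or this test: bufio.Scanner gives up on any single token (here, one very long line in lastStart.txt) that exceeds bufio.MaxScanTokenSize, which defaults to 64 KiB. A minimal, self-contained sketch of that failure mode, and of the usual workaround of enlarging the buffer via Scanner.Buffer, follows; it is illustrative only and is not taken from the minikube source.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// One line just over the default 64 KiB token limit reproduces the
		// same "bufio.Scanner: token too long" error seen in the logs above.
		long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println("default limit:", s.Err()) // bufio.Scanner: token too long

		// Raising the per-token limit with Buffer lets the same input scan cleanly.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for s.Scan() {
		}
		fmt.Println("1 MiB limit:", s.Err()) // <nil>
	}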

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (331.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-900777
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-900777
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-900777: exit status 82 (2m1.882141203s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-900777-m03"  ...
	* Stopping node "multinode-900777-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-900777" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-900777 --wait=true -v=8 --alsologtostderr
E0920 21:49:11.809444   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:50:09.132463   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-900777 --wait=true -v=8 --alsologtostderr: (3m27.729798275s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-900777
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-900777 -n multinode-900777
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 logs -n 25: (1.5100858s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777-m02.txt            |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777:/home/docker/cp-test_multinode-900777-m02_multinode-900777.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777 sudo cat                                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m02_multinode-900777.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03:/home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777-m03 sudo cat                                   | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp testdata/cp-test.txt                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777-m03.txt            |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777:/home/docker/cp-test_multinode-900777-m03_multinode-900777.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777 sudo cat                                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m03_multinode-900777.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02:/home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777-m02 sudo cat                                   | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-900777 node stop m03                                                          | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	| node    | multinode-900777 node start                                                             | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-900777                                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:46 UTC |                     |
	| stop    | -p multinode-900777                                                                     | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:46 UTC |                     |
	| start   | -p multinode-900777                                                                     | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:48 UTC | 20 Sep 24 21:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-900777                                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:48:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:48:20.784538   47651 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:48:20.784664   47651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:48:20.784672   47651 out.go:358] Setting ErrFile to fd 2...
	I0920 21:48:20.784676   47651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:48:20.784852   47651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:48:20.785392   47651 out.go:352] Setting JSON to false
	I0920 21:48:20.786260   47651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5449,"bootTime":1726863452,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:48:20.786352   47651 start.go:139] virtualization: kvm guest
	I0920 21:48:20.792463   47651 out.go:177] * [multinode-900777] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:48:20.797343   47651 notify.go:220] Checking for updates...
	I0920 21:48:20.802650   47651 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:48:20.807172   47651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:48:20.808415   47651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:48:20.809673   47651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:48:20.810913   47651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:48:20.812162   47651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:48:20.814081   47651 config.go:182] Loaded profile config "multinode-900777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:48:20.814211   47651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:48:20.814906   47651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:48:20.814979   47651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:48:20.830028   47651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0920 21:48:20.830394   47651 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:48:20.830916   47651 main.go:141] libmachine: Using API Version  1
	I0920 21:48:20.830936   47651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:48:20.831282   47651 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:48:20.831453   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.866926   47651 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:48:20.868110   47651 start.go:297] selected driver: kvm2
	I0920 21:48:20.868122   47651 start.go:901] validating driver "kvm2" against &{Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:48:20.868252   47651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:48:20.868550   47651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:48:20.868612   47651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:48:20.883758   47651 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:48:20.884452   47651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:48:20.884481   47651 cni.go:84] Creating CNI manager for ""
	I0920 21:48:20.884537   47651 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 21:48:20.884590   47651 start.go:340] cluster config:
	{Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:48:20.884708   47651 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:48:20.887095   47651 out.go:177] * Starting "multinode-900777" primary control-plane node in "multinode-900777" cluster
	I0920 21:48:20.888661   47651 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:48:20.888696   47651 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:48:20.888703   47651 cache.go:56] Caching tarball of preloaded images
	I0920 21:48:20.888801   47651 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:48:20.888815   47651 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:48:20.888929   47651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/config.json ...
	I0920 21:48:20.889131   47651 start.go:360] acquireMachinesLock for multinode-900777: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:48:20.889182   47651 start.go:364] duration metric: took 24.733µs to acquireMachinesLock for "multinode-900777"
	I0920 21:48:20.889201   47651 start.go:96] Skipping create...Using existing machine configuration
	I0920 21:48:20.889210   47651 fix.go:54] fixHost starting: 
	I0920 21:48:20.889491   47651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:48:20.889528   47651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:48:20.904368   47651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0920 21:48:20.904967   47651 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:48:20.905494   47651 main.go:141] libmachine: Using API Version  1
	I0920 21:48:20.905514   47651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:48:20.905919   47651 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:48:20.906152   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.906361   47651 main.go:141] libmachine: (multinode-900777) Calling .GetState
	I0920 21:48:20.908349   47651 fix.go:112] recreateIfNeeded on multinode-900777: state=Running err=<nil>
	W0920 21:48:20.908372   47651 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 21:48:20.910449   47651 out.go:177] * Updating the running kvm2 "multinode-900777" VM ...
	I0920 21:48:20.911796   47651 machine.go:93] provisionDockerMachine start ...
	I0920 21:48:20.911822   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.912022   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:20.914661   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:20.915209   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:20.915264   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:20.915440   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:20.915621   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:20.915778   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:20.915913   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:20.916076   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:20.916257   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:20.916269   47651 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 21:48:21.037220   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-900777
	
	I0920 21:48:21.037249   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.037496   47651 buildroot.go:166] provisioning hostname "multinode-900777"
	I0920 21:48:21.037521   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.037675   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.040370   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.040761   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.040785   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.041000   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.041170   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.041347   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.041479   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.041633   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.041794   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.041807   47651 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-900777 && echo "multinode-900777" | sudo tee /etc/hostname
	I0920 21:48:21.175986   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-900777
	
	I0920 21:48:21.176017   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.178717   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.179086   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.179115   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.179320   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.179456   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.179549   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.179703   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.179939   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.180150   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.180167   47651 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-900777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-900777/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-900777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:48:21.296471   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:48:21.296505   47651 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:48:21.296542   47651 buildroot.go:174] setting up certificates
	I0920 21:48:21.296551   47651 provision.go:84] configureAuth start
	I0920 21:48:21.296562   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.296863   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:48:21.299333   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.299678   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.299703   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.299864   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.302094   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.302453   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.302482   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.302602   47651 provision.go:143] copyHostCerts
	I0920 21:48:21.302632   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:48:21.302672   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:48:21.302681   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:48:21.302762   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:48:21.302879   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:48:21.302908   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:48:21.302917   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:48:21.302957   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:48:21.303055   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:48:21.303072   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:48:21.303079   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:48:21.303106   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:48:21.303260   47651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.multinode-900777 san=[127.0.0.1 192.168.39.150 localhost minikube multinode-900777]
	I0920 21:48:21.635775   47651 provision.go:177] copyRemoteCerts
	I0920 21:48:21.635831   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:48:21.635855   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.638820   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.639275   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.639305   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.639482   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.639637   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.639790   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.639945   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:48:21.729689   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:48:21.729775   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:48:21.758179   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:48:21.758242   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 21:48:21.782574   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:48:21.782641   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:48:21.807415   47651 provision.go:87] duration metric: took 510.850527ms to configureAuth
	I0920 21:48:21.807444   47651 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:48:21.807710   47651 config.go:182] Loaded profile config "multinode-900777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:48:21.807785   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.810282   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.810624   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.810643   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.810805   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.811017   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.811170   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.811292   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.811452   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.811616   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.811631   47651 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:49:52.548542   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:49:52.548572   47651 machine.go:96] duration metric: took 1m31.636756344s to provisionDockerMachine
	I0920 21:49:52.548586   47651 start.go:293] postStartSetup for "multinode-900777" (driver="kvm2")
	I0920 21:49:52.548600   47651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:49:52.548620   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.548899   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:49:52.548929   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.552016   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.552433   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.552452   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.552684   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.552864   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.553033   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.553176   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.643405   47651 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:49:52.648061   47651 command_runner.go:130] > NAME=Buildroot
	I0920 21:49:52.648087   47651 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 21:49:52.648091   47651 command_runner.go:130] > ID=buildroot
	I0920 21:49:52.648097   47651 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 21:49:52.648102   47651 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 21:49:52.648155   47651 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:49:52.648170   47651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:49:52.648239   47651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:49:52.648327   47651 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:49:52.648336   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:49:52.648432   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:49:52.658589   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:49:52.686272   47651 start.go:296] duration metric: took 137.67397ms for postStartSetup
	I0920 21:49:52.686308   47651 fix.go:56] duration metric: took 1m31.797098284s for fixHost
	I0920 21:49:52.686327   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.688869   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.689389   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.689420   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.689543   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.689752   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.689967   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.690175   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.690373   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:49:52.690572   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:49:52.690587   47651 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:49:52.804190   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726868992.772975243
	
	I0920 21:49:52.804216   47651 fix.go:216] guest clock: 1726868992.772975243
	I0920 21:49:52.804224   47651 fix.go:229] Guest: 2024-09-20 21:49:52.772975243 +0000 UTC Remote: 2024-09-20 21:49:52.686311783 +0000 UTC m=+91.937147563 (delta=86.66346ms)
	I0920 21:49:52.804253   47651 fix.go:200] guest clock delta is within tolerance: 86.66346ms
	I0920 21:49:52.804258   47651 start.go:83] releasing machines lock for "multinode-900777", held for 1m31.915064728s
	I0920 21:49:52.804275   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.804528   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:49:52.807221   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.807615   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.807643   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.807835   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808365   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808513   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808606   47651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:49:52.808651   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.808755   47651 ssh_runner.go:195] Run: cat /version.json
	I0920 21:49:52.808779   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.811371   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811561   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811740   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.811769   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811925   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.811957   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811939   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.812148   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.812181   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.812319   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.812340   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.812470   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.812632   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.812783   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.896347   47651 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0920 21:49:52.896702   47651 ssh_runner.go:195] Run: systemctl --version
	I0920 21:49:52.926088   47651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 21:49:52.926765   47651 command_runner.go:130] > systemd 252 (252)
	I0920 21:49:52.926806   47651 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 21:49:52.926883   47651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:49:53.088950   47651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 21:49:53.095570   47651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 21:49:53.095961   47651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:49:53.096040   47651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:49:53.105797   47651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 21:49:53.105822   47651 start.go:495] detecting cgroup driver to use...
	I0920 21:49:53.105890   47651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:49:53.123258   47651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:49:53.140103   47651 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:49:53.140167   47651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:49:53.155312   47651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:49:53.169497   47651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:49:53.319742   47651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:49:53.467637   47651 docker.go:233] disabling docker service ...
	I0920 21:49:53.467719   47651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:49:53.485181   47651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:49:53.500565   47651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:49:53.650384   47651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:49:53.798749   47651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
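	[Editor's note] The commands logged above move the node onto CRI-O by stopping and masking the competing runtimes (containerd, cri-dockerd and Docker). A condensed, hand-runnable sketch of those logged steps, assuming a shell inside the guest VM:
	
	    sudo systemctl stop -f containerd
	    # cri-dockerd
	    sudo systemctl stop -f cri-docker.socket
	    sudo systemctl stop -f cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    # docker
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    # check whether docker is still active (as the log does; non-zero exit means inactive)
	    sudo systemctl is-active --quiet service docker
	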
	I0920 21:49:53.813840   47651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:49:53.833484   47651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 21:49:53.833756   47651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:49:53.833825   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.845284   47651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:49:53.845378   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.856891   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.869921   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.882686   47651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:49:53.897199   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.910301   47651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.922227   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.935444   47651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:49:53.948038   47651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 21:49:53.948104   47651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:49:53.960626   47651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:49:54.109666   47651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:50:03.438119   47651 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.328416548s)
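	[Editor's note] The sed edits logged above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf to use the registry.k8s.io/pause:3.10 pause image, the cgroupfs cgroup manager with conmon in the "pod" cgroup, and a default sysctl that opens unprivileged low ports; crictl is pointed at /var/run/crio/crio.sock and IPv4 forwarding is enabled before crio is restarted. A condensed, hand-runnable sketch of those logged commands (guest-VM shell; the $conf variable is added here for brevity and is not part of the log):
	
	    # point crictl at the CRI-O socket
	    sudo sh -c 'printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" > /etc/crictl.yaml'
	    conf=/etc/crio/crio.conf.d/02-crio.conf   # drop-in file edited by the logged sed commands
	    # pause image and cgroup handling
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
	    sudo sed -i '/conmon_cgroup = .*/d' "$conf"
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
	    # allow unprivileged pods to bind low ports
	    sudo grep -q '^ *default_sysctls' "$conf" || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
	    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
	    # enable forwarding and restart the runtime
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload
	    sudo systemctl restart crio
	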
	I0920 21:50:03.438152   47651 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:50:03.438197   47651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:50:03.444113   47651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 21:50:03.444141   47651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 21:50:03.444156   47651 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0920 21:50:03.444165   47651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 21:50:03.444173   47651 command_runner.go:130] > Access: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444206   47651 command_runner.go:130] > Modify: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444218   47651 command_runner.go:130] > Change: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444224   47651 command_runner.go:130] >  Birth: -
	I0920 21:50:03.444246   47651 start.go:563] Will wait 60s for crictl version
	I0920 21:50:03.444288   47651 ssh_runner.go:195] Run: which crictl
	I0920 21:50:03.448294   47651 command_runner.go:130] > /usr/bin/crictl
	I0920 21:50:03.448362   47651 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:50:03.488510   47651 command_runner.go:130] > Version:  0.1.0
	I0920 21:50:03.488536   47651 command_runner.go:130] > RuntimeName:  cri-o
	I0920 21:50:03.488540   47651 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 21:50:03.488545   47651 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 21:50:03.489715   47651 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:50:03.489792   47651 ssh_runner.go:195] Run: crio --version
	I0920 21:50:03.522583   47651 command_runner.go:130] > crio version 1.29.1
	I0920 21:50:03.522612   47651 command_runner.go:130] > Version:        1.29.1
	I0920 21:50:03.522619   47651 command_runner.go:130] > GitCommit:      unknown
	I0920 21:50:03.522624   47651 command_runner.go:130] > GitCommitDate:  unknown
	I0920 21:50:03.522628   47651 command_runner.go:130] > GitTreeState:   clean
	I0920 21:50:03.522634   47651 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 21:50:03.522639   47651 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 21:50:03.522642   47651 command_runner.go:130] > Compiler:       gc
	I0920 21:50:03.522647   47651 command_runner.go:130] > Platform:       linux/amd64
	I0920 21:50:03.522651   47651 command_runner.go:130] > Linkmode:       dynamic
	I0920 21:50:03.522655   47651 command_runner.go:130] > BuildTags:      
	I0920 21:50:03.522659   47651 command_runner.go:130] >   containers_image_ostree_stub
	I0920 21:50:03.522663   47651 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 21:50:03.522668   47651 command_runner.go:130] >   btrfs_noversion
	I0920 21:50:03.522675   47651 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 21:50:03.522682   47651 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 21:50:03.522688   47651 command_runner.go:130] >   seccomp
	I0920 21:50:03.522695   47651 command_runner.go:130] > LDFlags:          unknown
	I0920 21:50:03.522703   47651 command_runner.go:130] > SeccompEnabled:   true
	I0920 21:50:03.522710   47651 command_runner.go:130] > AppArmorEnabled:  false
	I0920 21:50:03.522852   47651 ssh_runner.go:195] Run: crio --version
	I0920 21:50:03.552126   47651 command_runner.go:130] > crio version 1.29.1
	I0920 21:50:03.552159   47651 command_runner.go:130] > Version:        1.29.1
	I0920 21:50:03.552167   47651 command_runner.go:130] > GitCommit:      unknown
	I0920 21:50:03.552178   47651 command_runner.go:130] > GitCommitDate:  unknown
	I0920 21:50:03.552184   47651 command_runner.go:130] > GitTreeState:   clean
	I0920 21:50:03.552196   47651 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 21:50:03.552200   47651 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 21:50:03.552204   47651 command_runner.go:130] > Compiler:       gc
	I0920 21:50:03.552208   47651 command_runner.go:130] > Platform:       linux/amd64
	I0920 21:50:03.552212   47651 command_runner.go:130] > Linkmode:       dynamic
	I0920 21:50:03.552217   47651 command_runner.go:130] > BuildTags:      
	I0920 21:50:03.552221   47651 command_runner.go:130] >   containers_image_ostree_stub
	I0920 21:50:03.552226   47651 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 21:50:03.552231   47651 command_runner.go:130] >   btrfs_noversion
	I0920 21:50:03.552235   47651 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 21:50:03.552239   47651 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 21:50:03.552242   47651 command_runner.go:130] >   seccomp
	I0920 21:50:03.552246   47651 command_runner.go:130] > LDFlags:          unknown
	I0920 21:50:03.552250   47651 command_runner.go:130] > SeccompEnabled:   true
	I0920 21:50:03.552255   47651 command_runner.go:130] > AppArmorEnabled:  false
	I0920 21:50:03.555719   47651 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:50:03.556988   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:50:03.560063   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:50:03.560454   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:50:03.560481   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:50:03.560734   47651 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:50:03.565628   47651 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 21:50:03.565733   47651 kubeadm.go:883] updating cluster {Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:50:03.565873   47651 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:50:03.565927   47651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:50:03.612451   47651 command_runner.go:130] > {
	I0920 21:50:03.612471   47651 command_runner.go:130] >   "images": [
	I0920 21:50:03.612475   47651 command_runner.go:130] >     {
	I0920 21:50:03.612482   47651 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 21:50:03.612487   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612492   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 21:50:03.612495   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612499   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612510   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 21:50:03.612521   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 21:50:03.612527   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612533   47651 command_runner.go:130] >       "size": "87190579",
	I0920 21:50:03.612543   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612548   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612558   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612568   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612574   47651 command_runner.go:130] >     },
	I0920 21:50:03.612578   47651 command_runner.go:130] >     {
	I0920 21:50:03.612584   47651 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 21:50:03.612590   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612595   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 21:50:03.612599   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612604   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612613   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 21:50:03.612620   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 21:50:03.612626   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612633   47651 command_runner.go:130] >       "size": "1363676",
	I0920 21:50:03.612642   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612653   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612665   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612672   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612675   47651 command_runner.go:130] >     },
	I0920 21:50:03.612679   47651 command_runner.go:130] >     {
	I0920 21:50:03.612684   47651 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 21:50:03.612691   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612696   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 21:50:03.612702   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612706   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612716   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 21:50:03.612731   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 21:50:03.612740   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612747   47651 command_runner.go:130] >       "size": "31470524",
	I0920 21:50:03.612759   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612773   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612782   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612791   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612794   47651 command_runner.go:130] >     },
	I0920 21:50:03.612800   47651 command_runner.go:130] >     {
	I0920 21:50:03.612806   47651 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 21:50:03.612812   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612817   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 21:50:03.612823   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612829   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612845   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 21:50:03.612896   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 21:50:03.612905   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612909   47651 command_runner.go:130] >       "size": "63273227",
	I0920 21:50:03.612915   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612921   47651 command_runner.go:130] >       "username": "nonroot",
	I0920 21:50:03.612941   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612952   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612961   47651 command_runner.go:130] >     },
	I0920 21:50:03.612976   47651 command_runner.go:130] >     {
	I0920 21:50:03.612989   47651 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 21:50:03.613005   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613014   47651 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 21:50:03.613022   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613031   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613043   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 21:50:03.613058   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 21:50:03.613066   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613076   47651 command_runner.go:130] >       "size": "149009664",
	I0920 21:50:03.613084   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613093   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613100   47651 command_runner.go:130] >       },
	I0920 21:50:03.613104   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613113   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613121   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613127   47651 command_runner.go:130] >     },
	I0920 21:50:03.613135   47651 command_runner.go:130] >     {
	I0920 21:50:03.613147   47651 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 21:50:03.613156   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613167   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 21:50:03.613176   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613185   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613196   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 21:50:03.613208   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 21:50:03.613218   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613224   47651 command_runner.go:130] >       "size": "95237600",
	I0920 21:50:03.613232   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613242   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613250   47651 command_runner.go:130] >       },
	I0920 21:50:03.613259   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613268   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613277   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613291   47651 command_runner.go:130] >     },
	I0920 21:50:03.613299   47651 command_runner.go:130] >     {
	I0920 21:50:03.613310   47651 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 21:50:03.613322   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613334   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 21:50:03.613342   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613351   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613366   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 21:50:03.613381   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 21:50:03.613388   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613393   47651 command_runner.go:130] >       "size": "89437508",
	I0920 21:50:03.613401   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613411   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613419   47651 command_runner.go:130] >       },
	I0920 21:50:03.613428   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613436   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613446   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613453   47651 command_runner.go:130] >     },
	I0920 21:50:03.613459   47651 command_runner.go:130] >     {
	I0920 21:50:03.613471   47651 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 21:50:03.613477   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613483   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 21:50:03.613491   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613497   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613519   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 21:50:03.613534   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 21:50:03.613543   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613549   47651 command_runner.go:130] >       "size": "92733849",
	I0920 21:50:03.613558   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.613565   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613573   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613578   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613581   47651 command_runner.go:130] >     },
	I0920 21:50:03.613585   47651 command_runner.go:130] >     {
	I0920 21:50:03.613594   47651 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 21:50:03.613602   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613610   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 21:50:03.613615   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613622   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613636   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 21:50:03.613648   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 21:50:03.613653   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613660   47651 command_runner.go:130] >       "size": "68420934",
	I0920 21:50:03.613663   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613667   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613670   47651 command_runner.go:130] >       },
	I0920 21:50:03.613676   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613682   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613688   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613694   47651 command_runner.go:130] >     },
	I0920 21:50:03.613700   47651 command_runner.go:130] >     {
	I0920 21:50:03.613711   47651 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 21:50:03.613720   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613728   47651 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 21:50:03.613736   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613743   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613753   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 21:50:03.613762   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 21:50:03.613770   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613777   47651 command_runner.go:130] >       "size": "742080",
	I0920 21:50:03.613786   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613793   47651 command_runner.go:130] >         "value": "65535"
	I0920 21:50:03.613801   47651 command_runner.go:130] >       },
	I0920 21:50:03.613807   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613818   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613827   47651 command_runner.go:130] >       "pinned": true
	I0920 21:50:03.613835   47651 command_runner.go:130] >     }
	I0920 21:50:03.613839   47651 command_runner.go:130] >   ]
	I0920 21:50:03.613843   47651 command_runner.go:130] > }
	I0920 21:50:03.614023   47651 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:50:03.614035   47651 crio.go:433] Images already preloaded, skipping extraction
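The JSON printed by `sudo crictl images --output json` above is what the crio.go:514 check inspects before concluding that every image needed for Kubernetes v1.31.1 on cri-o is already preloaded. A small Go sketch of that kind of check (not the actual minikube code; the struct fields simply mirror the keys visible in the log output — id, repoTags, repoDigests, size, pinned):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the per-image objects shown in the log output above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Same command the test runs over SSH on the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	// Confirm one tag the v1.31.1/cri-o preload is expected to contain.
	const want = "registry.k8s.io/kube-apiserver:v1.31.1"
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				fmt.Println("preloaded:", want)
				return
			}
		}
	}
	fmt.Println("missing:", want)
}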
	I0920 21:50:03.614085   47651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:50:03.648366   47651 command_runner.go:130] > {
	I0920 21:50:03.648390   47651 command_runner.go:130] >   "images": [
	I0920 21:50:03.648394   47651 command_runner.go:130] >     {
	I0920 21:50:03.648402   47651 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 21:50:03.648406   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648412   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 21:50:03.648416   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648420   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648428   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 21:50:03.648438   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 21:50:03.648446   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648453   47651 command_runner.go:130] >       "size": "87190579",
	I0920 21:50:03.648462   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648471   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648496   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648506   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648511   47651 command_runner.go:130] >     },
	I0920 21:50:03.648516   47651 command_runner.go:130] >     {
	I0920 21:50:03.648524   47651 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 21:50:03.648533   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648540   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 21:50:03.648545   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648553   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648564   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 21:50:03.648577   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 21:50:03.648585   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648594   47651 command_runner.go:130] >       "size": "1363676",
	I0920 21:50:03.648605   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648613   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648622   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648631   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648638   47651 command_runner.go:130] >     },
	I0920 21:50:03.648647   47651 command_runner.go:130] >     {
	I0920 21:50:03.648657   47651 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 21:50:03.648663   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648668   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 21:50:03.648674   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648678   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648687   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 21:50:03.648697   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 21:50:03.648703   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648707   47651 command_runner.go:130] >       "size": "31470524",
	I0920 21:50:03.648713   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648718   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648724   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648727   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648733   47651 command_runner.go:130] >     },
	I0920 21:50:03.648737   47651 command_runner.go:130] >     {
	I0920 21:50:03.648743   47651 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 21:50:03.648750   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648762   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 21:50:03.648766   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648769   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648778   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 21:50:03.648790   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 21:50:03.648796   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648801   47651 command_runner.go:130] >       "size": "63273227",
	I0920 21:50:03.648807   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648811   47651 command_runner.go:130] >       "username": "nonroot",
	I0920 21:50:03.648821   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648828   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648832   47651 command_runner.go:130] >     },
	I0920 21:50:03.648837   47651 command_runner.go:130] >     {
	I0920 21:50:03.648843   47651 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 21:50:03.648848   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648853   47651 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 21:50:03.648858   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648862   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648870   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 21:50:03.648877   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 21:50:03.648882   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648887   47651 command_runner.go:130] >       "size": "149009664",
	I0920 21:50:03.648892   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.648896   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.648902   47651 command_runner.go:130] >       },
	I0920 21:50:03.648905   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648912   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648916   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648922   47651 command_runner.go:130] >     },
	I0920 21:50:03.648925   47651 command_runner.go:130] >     {
	I0920 21:50:03.648933   47651 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 21:50:03.648938   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648944   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 21:50:03.648949   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648953   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648963   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 21:50:03.648972   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 21:50:03.648978   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648982   47651 command_runner.go:130] >       "size": "95237600",
	I0920 21:50:03.648988   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.648991   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.648997   47651 command_runner.go:130] >       },
	I0920 21:50:03.649003   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649010   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649014   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649020   47651 command_runner.go:130] >     },
	I0920 21:50:03.649024   47651 command_runner.go:130] >     {
	I0920 21:50:03.649032   47651 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 21:50:03.649035   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649046   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 21:50:03.649052   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649056   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649065   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 21:50:03.649075   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 21:50:03.649084   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649090   47651 command_runner.go:130] >       "size": "89437508",
	I0920 21:50:03.649094   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649099   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.649102   47651 command_runner.go:130] >       },
	I0920 21:50:03.649106   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649110   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649115   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649120   47651 command_runner.go:130] >     },
	I0920 21:50:03.649123   47651 command_runner.go:130] >     {
	I0920 21:50:03.649129   47651 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 21:50:03.649135   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649139   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 21:50:03.649145   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649149   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649164   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 21:50:03.649173   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 21:50:03.649178   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649182   47651 command_runner.go:130] >       "size": "92733849",
	I0920 21:50:03.649188   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.649192   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649198   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649202   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649207   47651 command_runner.go:130] >     },
	I0920 21:50:03.649210   47651 command_runner.go:130] >     {
	I0920 21:50:03.649216   47651 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 21:50:03.649222   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649227   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 21:50:03.649232   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649236   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649246   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 21:50:03.649262   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 21:50:03.649268   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649272   47651 command_runner.go:130] >       "size": "68420934",
	I0920 21:50:03.649277   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649281   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.649285   47651 command_runner.go:130] >       },
	I0920 21:50:03.649291   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649294   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649298   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649303   47651 command_runner.go:130] >     },
	I0920 21:50:03.649307   47651 command_runner.go:130] >     {
	I0920 21:50:03.649315   47651 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 21:50:03.649321   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649325   47651 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 21:50:03.649330   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649334   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649343   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 21:50:03.649355   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 21:50:03.649360   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649364   47651 command_runner.go:130] >       "size": "742080",
	I0920 21:50:03.649369   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649374   47651 command_runner.go:130] >         "value": "65535"
	I0920 21:50:03.649379   47651 command_runner.go:130] >       },
	I0920 21:50:03.649383   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649389   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649394   47651 command_runner.go:130] >       "pinned": true
	I0920 21:50:03.649399   47651 command_runner.go:130] >     }
	I0920 21:50:03.649403   47651 command_runner.go:130] >   ]
	I0920 21:50:03.649408   47651 command_runner.go:130] > }
	I0920 21:50:03.649549   47651 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:50:03.649562   47651 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:50:03.649569   47651 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.1 crio true true} ...
	I0920 21:50:03.649656   47651 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-900777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
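The kubelet unit fragment printed above (kubeadm.go:946) is generated per node from the cluster config: the binary path embeds the Kubernetes version, and --hostname-override/--node-ip come from the node entry. A hedged Go sketch that rebuilds just the command on that ExecStart line from those three values (illustrative only, not the minikube template code):

package main

import "fmt"

// kubeletExecStart rebuilds the kubelet command shown on the ExecStart line above
// from the Kubernetes version, node name, and node IP; the flag set and paths are
// copied verbatim from the unit fragment in the log.
func kubeletExecStart(k8sVersion, nodeName, nodeIP string) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf "+
			"--config=/var/lib/kubelet/config.yaml --hostname-override=%s "+
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s",
		k8sVersion, nodeName, nodeIP)
}

func main() {
	// Values for the control-plane node in this log.
	fmt.Println(kubeletExecStart("v1.31.1", "multinode-900777", "192.168.39.150"))
}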
	I0920 21:50:03.649723   47651 ssh_runner.go:195] Run: crio config
	I0920 21:50:03.685818   47651 command_runner.go:130] ! time="2024-09-20 21:50:03.654073911Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 21:50:03.691420   47651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 21:50:03.698856   47651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 21:50:03.698886   47651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 21:50:03.698894   47651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 21:50:03.698898   47651 command_runner.go:130] > #
	I0920 21:50:03.698904   47651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 21:50:03.698910   47651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 21:50:03.698916   47651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 21:50:03.698922   47651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 21:50:03.698926   47651 command_runner.go:130] > # reload'.
	I0920 21:50:03.698932   47651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 21:50:03.698938   47651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 21:50:03.698944   47651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 21:50:03.698949   47651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 21:50:03.698976   47651 command_runner.go:130] > [crio]
	I0920 21:50:03.698990   47651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 21:50:03.699001   47651 command_runner.go:130] > # containers images, in this directory.
	I0920 21:50:03.699011   47651 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 21:50:03.699019   47651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 21:50:03.699025   47651 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 21:50:03.699032   47651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 21:50:03.699039   47651 command_runner.go:130] > # imagestore = ""
	I0920 21:50:03.699045   47651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 21:50:03.699053   47651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 21:50:03.699057   47651 command_runner.go:130] > storage_driver = "overlay"
	I0920 21:50:03.699063   47651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 21:50:03.699069   47651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 21:50:03.699073   47651 command_runner.go:130] > storage_option = [
	I0920 21:50:03.699078   47651 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 21:50:03.699083   47651 command_runner.go:130] > ]
	I0920 21:50:03.699089   47651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 21:50:03.699095   47651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 21:50:03.699102   47651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 21:50:03.699108   47651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 21:50:03.699116   47651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 21:50:03.699121   47651 command_runner.go:130] > # always happen on a node reboot
	I0920 21:50:03.699125   47651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 21:50:03.699136   47651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 21:50:03.699143   47651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 21:50:03.699149   47651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 21:50:03.699154   47651 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 21:50:03.699164   47651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 21:50:03.699173   47651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 21:50:03.699177   47651 command_runner.go:130] > # internal_wipe = true
	I0920 21:50:03.699187   47651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 21:50:03.699194   47651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 21:50:03.699198   47651 command_runner.go:130] > # internal_repair = false
	I0920 21:50:03.699203   47651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 21:50:03.699215   47651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 21:50:03.699223   47651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 21:50:03.699230   47651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 21:50:03.699239   47651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 21:50:03.699245   47651 command_runner.go:130] > [crio.api]
	I0920 21:50:03.699250   47651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 21:50:03.699257   47651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 21:50:03.699262   47651 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 21:50:03.699267   47651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 21:50:03.699274   47651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 21:50:03.699280   47651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 21:50:03.699284   47651 command_runner.go:130] > # stream_port = "0"
	I0920 21:50:03.699291   47651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 21:50:03.699296   47651 command_runner.go:130] > # stream_enable_tls = false
	I0920 21:50:03.699304   47651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 21:50:03.699310   47651 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 21:50:03.699316   47651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 21:50:03.699324   47651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 21:50:03.699330   47651 command_runner.go:130] > # minutes.
	I0920 21:50:03.699334   47651 command_runner.go:130] > # stream_tls_cert = ""
	I0920 21:50:03.699341   47651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 21:50:03.699347   47651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 21:50:03.699353   47651 command_runner.go:130] > # stream_tls_key = ""
	I0920 21:50:03.699359   47651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 21:50:03.699367   47651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 21:50:03.699379   47651 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 21:50:03.699385   47651 command_runner.go:130] > # stream_tls_ca = ""
	I0920 21:50:03.699392   47651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 21:50:03.699398   47651 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 21:50:03.699405   47651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 21:50:03.699412   47651 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 21:50:03.699418   47651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 21:50:03.699425   47651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 21:50:03.699429   47651 command_runner.go:130] > [crio.runtime]
	I0920 21:50:03.699435   47651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 21:50:03.699442   47651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 21:50:03.699446   47651 command_runner.go:130] > # "nofile=1024:2048"
	I0920 21:50:03.699454   47651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 21:50:03.699458   47651 command_runner.go:130] > # default_ulimits = [
	I0920 21:50:03.699464   47651 command_runner.go:130] > # ]
	I0920 21:50:03.699469   47651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 21:50:03.699475   47651 command_runner.go:130] > # no_pivot = false
	I0920 21:50:03.699484   47651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 21:50:03.699492   47651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 21:50:03.699497   47651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 21:50:03.699505   47651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 21:50:03.699509   47651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 21:50:03.699516   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 21:50:03.699521   47651 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 21:50:03.699528   47651 command_runner.go:130] > # Cgroup setting for conmon
	I0920 21:50:03.699534   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 21:50:03.699540   47651 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 21:50:03.699546   47651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 21:50:03.699553   47651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 21:50:03.699559   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 21:50:03.699565   47651 command_runner.go:130] > conmon_env = [
	I0920 21:50:03.699571   47651 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 21:50:03.699576   47651 command_runner.go:130] > ]
	I0920 21:50:03.699581   47651 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 21:50:03.699588   47651 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 21:50:03.699593   47651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 21:50:03.699599   47651 command_runner.go:130] > # default_env = [
	I0920 21:50:03.699602   47651 command_runner.go:130] > # ]
	I0920 21:50:03.699608   47651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 21:50:03.699617   47651 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0920 21:50:03.699624   47651 command_runner.go:130] > # selinux = false
	I0920 21:50:03.699630   47651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 21:50:03.699638   47651 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 21:50:03.699646   47651 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 21:50:03.699650   47651 command_runner.go:130] > # seccomp_profile = ""
	I0920 21:50:03.699656   47651 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 21:50:03.699664   47651 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 21:50:03.699671   47651 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 21:50:03.699677   47651 command_runner.go:130] > # which might increase security.
	I0920 21:50:03.699684   47651 command_runner.go:130] > # This option is currently deprecated,
	I0920 21:50:03.699690   47651 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 21:50:03.699696   47651 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 21:50:03.699701   47651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 21:50:03.699709   47651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 21:50:03.699718   47651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 21:50:03.699726   47651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 21:50:03.699733   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.699737   47651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 21:50:03.699745   47651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 21:50:03.699751   47651 command_runner.go:130] > # the cgroup blockio controller.
	I0920 21:50:03.699757   47651 command_runner.go:130] > # blockio_config_file = ""
	I0920 21:50:03.699765   47651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 21:50:03.699771   47651 command_runner.go:130] > # blockio parameters.
	I0920 21:50:03.699775   47651 command_runner.go:130] > # blockio_reload = false
	I0920 21:50:03.699781   47651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 21:50:03.699787   47651 command_runner.go:130] > # irqbalance daemon.
	I0920 21:50:03.699792   47651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 21:50:03.699800   47651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 21:50:03.699809   47651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 21:50:03.699817   47651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 21:50:03.699823   47651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 21:50:03.699831   47651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 21:50:03.699837   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.699841   47651 command_runner.go:130] > # rdt_config_file = ""
	I0920 21:50:03.699848   47651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 21:50:03.699853   47651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 21:50:03.699869   47651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 21:50:03.699876   47651 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 21:50:03.699882   47651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 21:50:03.699891   47651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 21:50:03.699896   47651 command_runner.go:130] > # will be added.
	I0920 21:50:03.699900   47651 command_runner.go:130] > # default_capabilities = [
	I0920 21:50:03.699907   47651 command_runner.go:130] > # 	"CHOWN",
	I0920 21:50:03.699911   47651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 21:50:03.699917   47651 command_runner.go:130] > # 	"FSETID",
	I0920 21:50:03.699920   47651 command_runner.go:130] > # 	"FOWNER",
	I0920 21:50:03.699926   47651 command_runner.go:130] > # 	"SETGID",
	I0920 21:50:03.699929   47651 command_runner.go:130] > # 	"SETUID",
	I0920 21:50:03.699935   47651 command_runner.go:130] > # 	"SETPCAP",
	I0920 21:50:03.699939   47651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 21:50:03.699946   47651 command_runner.go:130] > # 	"KILL",
	I0920 21:50:03.699949   47651 command_runner.go:130] > # ]
	I0920 21:50:03.699956   47651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 21:50:03.699968   47651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 21:50:03.699974   47651 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 21:50:03.699983   47651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 21:50:03.699991   47651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 21:50:03.699996   47651 command_runner.go:130] > default_sysctls = [
	I0920 21:50:03.700003   47651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 21:50:03.700006   47651 command_runner.go:130] > ]
	I0920 21:50:03.700011   47651 command_runner.go:130] > # List of devices on the host that a
	I0920 21:50:03.700019   47651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 21:50:03.700024   47651 command_runner.go:130] > # allowed_devices = [
	I0920 21:50:03.700028   47651 command_runner.go:130] > # 	"/dev/fuse",
	I0920 21:50:03.700034   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700038   47651 command_runner.go:130] > # List of additional devices. specified as
	I0920 21:50:03.700047   47651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 21:50:03.700055   47651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 21:50:03.700068   47651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 21:50:03.700075   47651 command_runner.go:130] > # additional_devices = [
	I0920 21:50:03.700079   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700086   47651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 21:50:03.700093   47651 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 21:50:03.700097   47651 command_runner.go:130] > # 	"/etc/cdi",
	I0920 21:50:03.700103   47651 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 21:50:03.700106   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700112   47651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 21:50:03.700120   47651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 21:50:03.700125   47651 command_runner.go:130] > # Defaults to false.
	I0920 21:50:03.700130   47651 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 21:50:03.700138   47651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 21:50:03.700146   47651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 21:50:03.700152   47651 command_runner.go:130] > # hooks_dir = [
	I0920 21:50:03.700156   47651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 21:50:03.700162   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700168   47651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 21:50:03.700176   47651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 21:50:03.700183   47651 command_runner.go:130] > # its default mounts from the following two files:
	I0920 21:50:03.700186   47651 command_runner.go:130] > #
	I0920 21:50:03.700192   47651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 21:50:03.700200   47651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 21:50:03.700213   47651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 21:50:03.700218   47651 command_runner.go:130] > #
	I0920 21:50:03.700223   47651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 21:50:03.700232   47651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 21:50:03.700240   47651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 21:50:03.700249   47651 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 21:50:03.700255   47651 command_runner.go:130] > #
	I0920 21:50:03.700259   47651 command_runner.go:130] > # default_mounts_file = ""
	I0920 21:50:03.700268   47651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 21:50:03.700274   47651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 21:50:03.700281   47651 command_runner.go:130] > pids_limit = 1024
	I0920 21:50:03.700287   47651 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0920 21:50:03.700295   47651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 21:50:03.700301   47651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 21:50:03.700311   47651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 21:50:03.700317   47651 command_runner.go:130] > # log_size_max = -1
	I0920 21:50:03.700324   47651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 21:50:03.700330   47651 command_runner.go:130] > # log_to_journald = false
	I0920 21:50:03.700338   47651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 21:50:03.700346   47651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 21:50:03.700351   47651 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 21:50:03.700358   47651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 21:50:03.700363   47651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 21:50:03.700370   47651 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 21:50:03.700375   47651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 21:50:03.700381   47651 command_runner.go:130] > # read_only = false
	I0920 21:50:03.700387   47651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 21:50:03.700395   47651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 21:50:03.700399   47651 command_runner.go:130] > # live configuration reload.
	I0920 21:50:03.700403   47651 command_runner.go:130] > # log_level = "info"
	I0920 21:50:03.700411   47651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 21:50:03.700418   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.700422   47651 command_runner.go:130] > # log_filter = ""
	I0920 21:50:03.700427   47651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 21:50:03.700436   47651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 21:50:03.700440   47651 command_runner.go:130] > # separated by comma.
	I0920 21:50:03.700449   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700453   47651 command_runner.go:130] > # uid_mappings = ""
	I0920 21:50:03.700459   47651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 21:50:03.700473   47651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 21:50:03.700478   47651 command_runner.go:130] > # separated by comma.
	I0920 21:50:03.700485   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700494   47651 command_runner.go:130] > # gid_mappings = ""
	I0920 21:50:03.700501   47651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 21:50:03.700510   47651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 21:50:03.700516   47651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 21:50:03.700525   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700531   47651 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 21:50:03.700537   47651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 21:50:03.700545   47651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 21:50:03.700550   47651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 21:50:03.700559   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700565   47651 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 21:50:03.700571   47651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 21:50:03.700579   47651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 21:50:03.700592   47651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 21:50:03.700598   47651 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 21:50:03.700603   47651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 21:50:03.700611   47651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 21:50:03.700616   47651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 21:50:03.700620   47651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 21:50:03.700626   47651 command_runner.go:130] > drop_infra_ctr = false
	I0920 21:50:03.700633   47651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 21:50:03.700640   47651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 21:50:03.700648   47651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 21:50:03.700654   47651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 21:50:03.700661   47651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 21:50:03.700668   47651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 21:50:03.700676   47651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 21:50:03.700681   47651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 21:50:03.700688   47651 command_runner.go:130] > # shared_cpuset = ""
	I0920 21:50:03.700694   47651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 21:50:03.700701   47651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 21:50:03.700705   47651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 21:50:03.700714   47651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 21:50:03.700720   47651 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 21:50:03.700727   47651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 21:50:03.700737   47651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 21:50:03.700743   47651 command_runner.go:130] > # enable_criu_support = false
	I0920 21:50:03.700748   47651 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 21:50:03.700755   47651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 21:50:03.700761   47651 command_runner.go:130] > # enable_pod_events = false
	I0920 21:50:03.700767   47651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 21:50:03.700784   47651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 21:50:03.700788   47651 command_runner.go:130] > # default_runtime = "runc"
	I0920 21:50:03.700793   47651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 21:50:03.700800   47651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0920 21:50:03.700810   47651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 21:50:03.700817   47651 command_runner.go:130] > # creation as a file is not desired either.
	I0920 21:50:03.700825   47651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 21:50:03.700832   47651 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 21:50:03.700836   47651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 21:50:03.700840   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700846   47651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 21:50:03.700854   47651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 21:50:03.700862   47651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 21:50:03.700866   47651 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 21:50:03.700872   47651 command_runner.go:130] > #
	I0920 21:50:03.700876   47651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 21:50:03.700883   47651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 21:50:03.700901   47651 command_runner.go:130] > # runtime_type = "oci"
	I0920 21:50:03.700908   47651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 21:50:03.700913   47651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 21:50:03.700919   47651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 21:50:03.700924   47651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 21:50:03.700930   47651 command_runner.go:130] > # monitor_env = []
	I0920 21:50:03.700934   47651 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 21:50:03.700940   47651 command_runner.go:130] > # allowed_annotations = []
	I0920 21:50:03.700946   47651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 21:50:03.700951   47651 command_runner.go:130] > # Where:
	I0920 21:50:03.700957   47651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 21:50:03.700966   47651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 21:50:03.700975   47651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 21:50:03.700982   47651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 21:50:03.700991   47651 command_runner.go:130] > #   in $PATH.
	I0920 21:50:03.701001   47651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 21:50:03.701007   47651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 21:50:03.701014   47651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 21:50:03.701019   47651 command_runner.go:130] > #   state.
	I0920 21:50:03.701025   47651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 21:50:03.701033   47651 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0920 21:50:03.701042   47651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 21:50:03.701047   47651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 21:50:03.701055   47651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 21:50:03.701062   47651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 21:50:03.701069   47651 command_runner.go:130] > #   The currently recognized values are:
	I0920 21:50:03.701075   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 21:50:03.701084   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 21:50:03.701092   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 21:50:03.701099   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 21:50:03.701108   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 21:50:03.701114   47651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 21:50:03.701122   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 21:50:03.701130   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 21:50:03.701138   47651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 21:50:03.701146   47651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 21:50:03.701152   47651 command_runner.go:130] > #   deprecated option "conmon".
	I0920 21:50:03.701158   47651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 21:50:03.701165   47651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 21:50:03.701171   47651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 21:50:03.701178   47651 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 21:50:03.701185   47651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 21:50:03.701192   47651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 21:50:03.701198   47651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 21:50:03.701203   47651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 21:50:03.701211   47651 command_runner.go:130] > #
	I0920 21:50:03.701216   47651 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 21:50:03.701223   47651 command_runner.go:130] > #
	I0920 21:50:03.701229   47651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 21:50:03.701237   47651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 21:50:03.701242   47651 command_runner.go:130] > #
	I0920 21:50:03.701248   47651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 21:50:03.701256   47651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 21:50:03.701261   47651 command_runner.go:130] > #
	I0920 21:50:03.701267   47651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 21:50:03.701272   47651 command_runner.go:130] > # feature.
	I0920 21:50:03.701275   47651 command_runner.go:130] > #
	I0920 21:50:03.701281   47651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 21:50:03.701289   47651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 21:50:03.701296   47651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 21:50:03.701304   47651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 21:50:03.701312   47651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 21:50:03.701317   47651 command_runner.go:130] > #
	I0920 21:50:03.701323   47651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 21:50:03.701330   47651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 21:50:03.701336   47651 command_runner.go:130] > #
	I0920 21:50:03.701341   47651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 21:50:03.701349   47651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 21:50:03.701354   47651 command_runner.go:130] > #
	I0920 21:50:03.701360   47651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 21:50:03.701367   47651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 21:50:03.701371   47651 command_runner.go:130] > # limitation.
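A hedged sketch of a pod that opts into the notifier described above; the pod name and image are illustrative, and it assumes a runtime handler that lists "io.kubernetes.cri-o.seccompNotifierAction" in its allowed_annotations:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: seccomp-notifier-demo              # illustrative name
      annotations:
        io.kubernetes.cri-o.seccompNotifierAction: "stop"
    spec:
      restartPolicy: Never                     # required, as noted above
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10      # illustrative image
        securityContext:
          seccompProfile:
            type: RuntimeDefault               # a seccomp profile must be in effect
    EOF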
	I0920 21:50:03.701376   47651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 21:50:03.701383   47651 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 21:50:03.701387   47651 command_runner.go:130] > runtime_type = "oci"
	I0920 21:50:03.701393   47651 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 21:50:03.701397   47651 command_runner.go:130] > runtime_config_path = ""
	I0920 21:50:03.701403   47651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 21:50:03.701408   47651 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 21:50:03.701413   47651 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 21:50:03.701417   47651 command_runner.go:130] > monitor_env = [
	I0920 21:50:03.701425   47651 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 21:50:03.701431   47651 command_runner.go:130] > ]
	I0920 21:50:03.701436   47651 command_runner.go:130] > privileged_without_host_devices = false
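For comparison with the runc entry above, an additional handler would follow the same shape. A sketch for crun, where the binary path and drop-in file name are assumptions rather than values from this run:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/20-crun.conf
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"             # assumed install location
    runtime_type = "oci"
    runtime_root = "/run/crun"
    monitor_path = "/usr/libexec/crio/conmon"
    monitor_cgroup = "pod"
    EOF

Kubernetes would then select this handler through a RuntimeClass whose handler field is "crun".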
	I0920 21:50:03.701442   47651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 21:50:03.701449   47651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 21:50:03.701454   47651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 21:50:03.701462   47651 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0920 21:50:03.701473   47651 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 21:50:03.701480   47651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 21:50:03.701489   47651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 21:50:03.701498   47651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 21:50:03.701504   47651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 21:50:03.701513   47651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 21:50:03.701516   47651 command_runner.go:130] > # Example:
	I0920 21:50:03.701521   47651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 21:50:03.701528   47651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 21:50:03.701533   47651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 21:50:03.701537   47651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 21:50:03.701542   47651 command_runner.go:130] > # cpuset = "0-1"
	I0920 21:50:03.701546   47651 command_runner.go:130] > # cpushares = 0
	I0920 21:50:03.701552   47651 command_runner.go:130] > # Where:
	I0920 21:50:03.701556   47651 command_runner.go:130] > # The workload name is workload-type.
	I0920 21:50:03.701565   47651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 21:50:03.701572   47651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 21:50:03.701577   47651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 21:50:03.701586   47651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 21:50:03.701594   47651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
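A sketch of a pod that opts into the example "workload-type" workload described above; the container name, image, and share value are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: workload-demo
      annotations:
        io.crio/workload: ""                                # activation key; value is ignored
        io.crio.workload-type/demo: '{"cpushares": "512"}'  # per-container override
    spec:
      containers:
      - name: demo
        image: registry.k8s.io/pause:3.10
    EOF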
	I0920 21:50:03.701600   47651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 21:50:03.701606   47651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 21:50:03.701612   47651 command_runner.go:130] > # Default value is set to true
	I0920 21:50:03.701617   47651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 21:50:03.701624   47651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 21:50:03.701630   47651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 21:50:03.701634   47651 command_runner.go:130] > # Default value is set to 'false'
	I0920 21:50:03.701641   47651 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 21:50:03.701647   47651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 21:50:03.701650   47651 command_runner.go:130] > #
	I0920 21:50:03.701655   47651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 21:50:03.701661   47651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 21:50:03.701666   47651 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 21:50:03.701672   47651 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 21:50:03.701681   47651 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 21:50:03.701684   47651 command_runner.go:130] > [crio.image]
	I0920 21:50:03.701690   47651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 21:50:03.701694   47651 command_runner.go:130] > # default_transport = "docker://"
	I0920 21:50:03.701699   47651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 21:50:03.701705   47651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 21:50:03.701708   47651 command_runner.go:130] > # global_auth_file = ""
	I0920 21:50:03.701713   47651 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 21:50:03.701717   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.701721   47651 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 21:50:03.701727   47651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 21:50:03.701732   47651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 21:50:03.701736   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.701740   47651 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 21:50:03.701745   47651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 21:50:03.701751   47651 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0920 21:50:03.701756   47651 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0920 21:50:03.701761   47651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 21:50:03.701765   47651 command_runner.go:130] > # pause_command = "/pause"
	I0920 21:50:03.701770   47651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 21:50:03.701776   47651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 21:50:03.701780   47651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 21:50:03.701787   47651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 21:50:03.701792   47651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 21:50:03.701797   47651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 21:50:03.701801   47651 command_runner.go:130] > # pinned_images = [
	I0920 21:50:03.701804   47651 command_runner.go:130] > # ]
	I0920 21:50:03.701809   47651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 21:50:03.701815   47651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 21:50:03.701823   47651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 21:50:03.701831   47651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 21:50:03.701838   47651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 21:50:03.701842   47651 command_runner.go:130] > # signature_policy = ""
	I0920 21:50:03.701847   47651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 21:50:03.701868   47651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 21:50:03.701881   47651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 21:50:03.701893   47651 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0920 21:50:03.701901   47651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 21:50:03.701906   47651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
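If signature_policy (or a per-namespace file under signature_policy_dir) were set, the referenced file would use the containers-policy.json(5) format. A minimal, permissive sketch; the path shown is the usual system-wide default, not a file touched by this run:

    cat <<'EOF' | sudo tee /etc/containers/policy.json
    {
      "default": [
        { "type": "insecureAcceptAnything" }
      ]
    }
    EOF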
	I0920 21:50:03.701914   47651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 21:50:03.701922   47651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 21:50:03.701926   47651 command_runner.go:130] > # changing them here.
	I0920 21:50:03.701930   47651 command_runner.go:130] > # insecure_registries = [
	I0920 21:50:03.701935   47651 command_runner.go:130] > # ]
	I0920 21:50:03.701941   47651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 21:50:03.701948   47651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 21:50:03.701952   47651 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 21:50:03.701961   47651 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 21:50:03.701967   47651 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 21:50:03.701973   47651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 21:50:03.701979   47651 command_runner.go:130] > # CNI plugins.
	I0920 21:50:03.701984   47651 command_runner.go:130] > [crio.network]
	I0920 21:50:03.701992   47651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 21:50:03.702000   47651 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0920 21:50:03.702005   47651 command_runner.go:130] > # cni_default_network = ""
	I0920 21:50:03.702012   47651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 21:50:03.702017   47651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 21:50:03.702024   47651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 21:50:03.702031   47651 command_runner.go:130] > # plugin_dirs = [
	I0920 21:50:03.702035   47651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 21:50:03.702040   47651 command_runner.go:130] > # ]
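Files under network_dir use the standard CNI conflist format. A minimal bridge sketch for illustration only; the CNI config actually used in this run is generated by minikube/kindnet and differs:

    cat <<'EOF' | sudo tee /etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "10.244.0.0/16" } ]]
          }
        }
      ]
    }
    EOF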
	I0920 21:50:03.702045   47651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 21:50:03.702050   47651 command_runner.go:130] > [crio.metrics]
	I0920 21:50:03.702055   47651 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 21:50:03.702061   47651 command_runner.go:130] > enable_metrics = true
	I0920 21:50:03.702066   47651 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 21:50:03.702072   47651 command_runner.go:130] > # By default, all metrics are enabled.
	I0920 21:50:03.702078   47651 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0920 21:50:03.702086   47651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 21:50:03.702094   47651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 21:50:03.702099   47651 command_runner.go:130] > # metrics_collectors = [
	I0920 21:50:03.702102   47651 command_runner.go:130] > # 	"operations",
	I0920 21:50:03.702107   47651 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 21:50:03.702113   47651 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 21:50:03.702117   47651 command_runner.go:130] > # 	"operations_errors",
	I0920 21:50:03.702124   47651 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 21:50:03.702128   47651 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 21:50:03.702134   47651 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 21:50:03.702141   47651 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 21:50:03.702148   47651 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 21:50:03.702151   47651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 21:50:03.702157   47651 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 21:50:03.702162   47651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 21:50:03.702168   47651 command_runner.go:130] > # 	"containers_oom_total",
	I0920 21:50:03.702175   47651 command_runner.go:130] > # 	"containers_oom",
	I0920 21:50:03.702182   47651 command_runner.go:130] > # 	"processes_defunct",
	I0920 21:50:03.702186   47651 command_runner.go:130] > # 	"operations_total",
	I0920 21:50:03.702191   47651 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 21:50:03.702198   47651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 21:50:03.702202   47651 command_runner.go:130] > # 	"operations_errors_total",
	I0920 21:50:03.702212   47651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 21:50:03.702219   47651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 21:50:03.702225   47651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 21:50:03.702230   47651 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 21:50:03.702236   47651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 21:50:03.702240   47651 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 21:50:03.702247   47651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 21:50:03.702251   47651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 21:50:03.702257   47651 command_runner.go:130] > # ]
	I0920 21:50:03.702262   47651 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 21:50:03.702268   47651 command_runner.go:130] > # metrics_port = 9090
	I0920 21:50:03.702273   47651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 21:50:03.702279   47651 command_runner.go:130] > # metrics_socket = ""
	I0920 21:50:03.702284   47651 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 21:50:03.702292   47651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 21:50:03.702298   47651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 21:50:03.702304   47651 command_runner.go:130] > # certificate on any modification event.
	I0920 21:50:03.702308   47651 command_runner.go:130] > # metrics_cert = ""
	I0920 21:50:03.702315   47651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 21:50:03.702320   47651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 21:50:03.702326   47651 command_runner.go:130] > # metrics_key = ""
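With enable_metrics = true, the collectors listed above are served in Prometheus format on metrics_port (9090 by default). A quick node-local check might look like the following; the grep pattern is illustrative:

    curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations' | head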
	I0920 21:50:03.702331   47651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 21:50:03.702337   47651 command_runner.go:130] > [crio.tracing]
	I0920 21:50:03.702343   47651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 21:50:03.702349   47651 command_runner.go:130] > # enable_tracing = false
	I0920 21:50:03.702355   47651 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0920 21:50:03.702361   47651 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 21:50:03.702369   47651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 21:50:03.702375   47651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 21:50:03.702379   47651 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 21:50:03.702385   47651 command_runner.go:130] > [crio.nri]
	I0920 21:50:03.702389   47651 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 21:50:03.702395   47651 command_runner.go:130] > # enable_nri = false
	I0920 21:50:03.702402   47651 command_runner.go:130] > # NRI socket to listen on.
	I0920 21:50:03.702408   47651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 21:50:03.702412   47651 command_runner.go:130] > # NRI plugin directory to use.
	I0920 21:50:03.702417   47651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 21:50:03.702424   47651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 21:50:03.702428   47651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 21:50:03.702436   47651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 21:50:03.702441   47651 command_runner.go:130] > # nri_disable_connections = false
	I0920 21:50:03.702448   47651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 21:50:03.702453   47651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 21:50:03.702459   47651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 21:50:03.702464   47651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 21:50:03.702471   47651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 21:50:03.702475   47651 command_runner.go:130] > [crio.stats]
	I0920 21:50:03.702480   47651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 21:50:03.702487   47651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 21:50:03.702490   47651 command_runner.go:130] > # stats_collection_period = 0
	I0920 21:50:03.702555   47651 cni.go:84] Creating CNI manager for ""
	I0920 21:50:03.702565   47651 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 21:50:03.702573   47651 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:50:03.702593   47651 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-900777 NodeName:multinode-900777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:50:03.702713   47651 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-900777"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
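	minikube ships this generated config to the node (see the kubeadm.yaml.new transfer below) and hands it to kubeadm. Done by hand, the apply step would look roughly like this; the flags are illustrative, not the exact invocation from this run:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all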
	
	I0920 21:50:03.702770   47651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:50:03.714062   47651 command_runner.go:130] > kubeadm
	I0920 21:50:03.714087   47651 command_runner.go:130] > kubectl
	I0920 21:50:03.714094   47651 command_runner.go:130] > kubelet
	I0920 21:50:03.714125   47651 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:50:03.714169   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 21:50:03.724319   47651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 21:50:03.742504   47651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:50:03.760213   47651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 21:50:03.779056   47651 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0920 21:50:03.783055   47651 command_runner.go:130] > 192.168.39.150	control-plane.minikube.internal
	I0920 21:50:03.783122   47651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:50:03.926637   47651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:50:03.942713   47651 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777 for IP: 192.168.39.150
	I0920 21:50:03.942740   47651 certs.go:194] generating shared ca certs ...
	I0920 21:50:03.942760   47651 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:50:03.943088   47651 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:50:03.943169   47651 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:50:03.943186   47651 certs.go:256] generating profile certs ...
	I0920 21:50:03.943301   47651 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/client.key
	I0920 21:50:03.943376   47651 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key.b3bcb748
	I0920 21:50:03.943428   47651 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key
	I0920 21:50:03.943441   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:50:03.943468   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:50:03.943487   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:50:03.943503   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:50:03.943521   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:50:03.943540   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:50:03.943556   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:50:03.943572   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:50:03.943638   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:50:03.943678   47651 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:50:03.943693   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:50:03.943730   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:50:03.943762   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:50:03.943799   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:50:03.943853   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:50:03.943897   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:50:03.943918   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:03.943936   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:50:03.944532   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:50:03.972117   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:50:03.998011   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:50:04.023056   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:50:04.048420   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 21:50:04.074099   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:50:04.100445   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:50:04.126047   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:50:04.151031   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:50:04.177338   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:50:04.203759   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:50:04.229779   47651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:50:04.248051   47651 ssh_runner.go:195] Run: openssl version
	I0920 21:50:04.254129   47651 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 21:50:04.254313   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:50:04.265552   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270197   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270348   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270401   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.276947   47651 command_runner.go:130] > 51391683
	I0920 21:50:04.277112   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:50:04.286664   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:50:04.297835   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302605   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302645   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302683   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.308990   47651 command_runner.go:130] > 3ec20f2e
	I0920 21:50:04.309188   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:50:04.319386   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:50:04.330668   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335269   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335304   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335474   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.341501   47651 command_runner.go:130] > b5213941
	I0920 21:50:04.341644   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
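The pattern above is the standard OpenSSL hashed-directory layout: each CA certificate is linked as <subject-hash>.0 so that verification can locate it by hash. Reproducing the last step by hand:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0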
	I0920 21:50:04.351275   47651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:50:04.356095   47651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:50:04.356116   47651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 21:50:04.356123   47651 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0920 21:50:04.356133   47651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 21:50:04.356146   47651 command_runner.go:130] > Access: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356155   47651 command_runner.go:130] > Modify: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356163   47651 command_runner.go:130] > Change: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356168   47651 command_runner.go:130] >  Birth: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356238   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 21:50:04.364433   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.364490   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 21:50:04.385950   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.386157   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 21:50:04.392216   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.392309   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 21:50:04.398206   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.398275   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 21:50:04.404128   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.404206   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 21:50:04.409828   47651 command_runner.go:130] > Certificate will not expire
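Each -checkend probe above asks whether the certificate expires within the next 86400 seconds (24 hours); the command prints the verdict and exits 0 while that window of validity remains:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
    # -> "Certificate will not expire" (exit status 0) when at least 24h of validity remain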
	I0920 21:50:04.409999   47651 kubeadm.go:392] StartCluster: {Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:50:04.410129   47651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:50:04.410182   47651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:50:04.454898   47651 command_runner.go:130] > 6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700
	I0920 21:50:04.454929   47651 command_runner.go:130] > 18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218
	I0920 21:50:04.454937   47651 command_runner.go:130] > 582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69
	I0920 21:50:04.454947   47651 command_runner.go:130] > b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71
	I0920 21:50:04.454955   47651 command_runner.go:130] > c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf
	I0920 21:50:04.454977   47651 command_runner.go:130] > 50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2
	I0920 21:50:04.454985   47651 command_runner.go:130] > a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c
	I0920 21:50:04.455000   47651 command_runner.go:130] > e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded
	I0920 21:50:04.455020   47651 cri.go:89] found id: "6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700"
	I0920 21:50:04.455031   47651 cri.go:89] found id: "18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218"
	I0920 21:50:04.455037   47651 cri.go:89] found id: "582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69"
	I0920 21:50:04.455041   47651 cri.go:89] found id: "b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71"
	I0920 21:50:04.455046   47651 cri.go:89] found id: "c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf"
	I0920 21:50:04.455053   47651 cri.go:89] found id: "50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2"
	I0920 21:50:04.455057   47651 cri.go:89] found id: "a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c"
	I0920 21:50:04.455061   47651 cri.go:89] found id: "e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded"
	I0920 21:50:04.455065   47651 cri.go:89] found id: ""
	I0920 21:50:04.455113   47651 ssh_runner.go:195] Run: sudo runc list -f json
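	(Note: the two commands above show how minikube enumerates the existing kube-system containers before restarting the cluster: it shells out over SSH to crictl ps -a --quiet with a pod-namespace label filter, then cross-checks the low-level runtime state with sudo runc list -f json. The snippet below is a minimal sketch of that first step only; listKubeSystemContainers is a hypothetical helper that runs crictl locally rather than through minikube's ssh_runner, and it is an illustration of the technique, not minikube's own implementation.)

	// listKubeSystemContainers mirrors the crictl invocation logged above:
	// list all container IDs whose pod namespace label is kube-system.
	// Hypothetical helper; runs crictl locally instead of over SSH.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line) // one container ID per line, as in the output above
			}
		}
		return ids, nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}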
	
	
	==> CRI-O <==
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.151751728Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869109151729168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=230070f0-0660-40b0-8077-d622fb3ae9a8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.152438568Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ea516a1-be07-4993-9f5e-5297df8037ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.152515928Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ea516a1-be07-4993-9f5e-5297df8037ba name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.152870690Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ea516a1-be07-4993-9f5e-5297df8037ba name=/runtime.v1.RuntimeService/ListContainers
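	(Note: the ListContainers entries above are the kubelet polling CRI-O over its gRPC socket; the request carries an empty filter, hence the "No filters were applied, returning full container list" debug line. The sketch below is a minimal client that issues the same call, assuming the standard k8s.io/cri-api runtime.v1 client and the default CRI-O socket path /var/run/crio/crio.sock; it only illustrates the call seen in the log and is not part of the test harness.)

	// Minimal CRI client sketch: issue the ListContainers call that CRI-O logs
	// above, over the default CRI-O unix socket. Assumes k8s.io/cri-api and
	// google.golang.org/grpc are available; the socket path is the CRI-O default.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// An empty filter returns the full container list, matching the
		// "No filters were applied" debug entry above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Println(c.Id, c.Metadata.Name, c.State)
		}
	}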
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.198231126Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f016d0f1-94db-463d-b398-d90844324bc4 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.198309537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f016d0f1-94db-463d-b398-d90844324bc4 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.199586435Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=21848eb0-866f-41d6-b6d4-7f603265fcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.199998445Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869109199975549,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=21848eb0-866f-41d6-b6d4-7f603265fcf7 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.200905899Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae659cb7-9a8f-456f-86e2-3d086bc31a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.201006012Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae659cb7-9a8f-456f-86e2-3d086bc31a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.201396433Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae659cb7-9a8f-456f-86e2-3d086bc31a5b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.243845456Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b3dce86-8635-4987-b7b4-c42aa42529b8 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.243918618Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b3dce86-8635-4987-b7b4-c42aa42529b8 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.245268789Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8a4c282-9661-488c-a8cc-f5067f7b97ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.245675446Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869109245652911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8a4c282-9661-488c-a8cc-f5067f7b97ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.246220039Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=076296f9-bc49-4abd-ae12-671b824626d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.246304881Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=076296f9-bc49-4abd-ae12-671b824626d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.246637197Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=076296f9-bc49-4abd-ae12-671b824626d4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.289220300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69970797-83bb-4907-807c-82ce39830e76 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.289318717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69970797-83bb-4907-807c-82ce39830e76 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.290458844Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06366a00-4e07-43e0-8fb1-a099ab2b6432 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.290849746Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869109290828751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06366a00-4e07-43e0-8fb1-a099ab2b6432 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.291649167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aca6cf7-93e6-4cfc-996c-0391f1ea3eef name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.291718791Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aca6cf7-93e6-4cfc-996c-0391f1ea3eef name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:51:49 multinode-900777 crio[2735]: time="2024-09-20 21:51:49.292130482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6aca6cf7-93e6-4cfc-996c-0391f1ea3eef name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b67c7cba8719a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   002a17acda15d       busybox-7dff88458-ggs4l
	ce2eaf799ef1e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      About a minute ago   Running             kindnet-cni               1                   1e1f03dc85c5e       kindnet-kqbn9
	5c0d06b49a314       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      About a minute ago   Running             coredns                   1                   373bd526668d6       coredns-7c65d6cfc9-jnskb
	c0e9bd6143b49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   b4bfdb16e2b38       storage-provisioner
	b969bded3ff4a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      About a minute ago   Running             kube-proxy                1                   c764e663a1bb1       kube-proxy-b7xqf
	a3d28feb205eb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      About a minute ago   Running             kube-controller-manager   1                   6086caea199f7       kube-controller-manager-multinode-900777
	23a5eed7a8f74       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      About a minute ago   Running             kube-scheduler            1                   4608b5d9b987b       kube-scheduler-multinode-900777
	a63051e73e23c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      About a minute ago   Running             kube-apiserver            1                   95dfcfa5b8701       kube-apiserver-multinode-900777
	cd4e566041f5a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      About a minute ago   Running             etcd                      1                   daa96fb0b4999       etcd-multinode-900777
	17bb805c9f26b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   f36cbf240c003       busybox-7dff88458-ggs4l
	6f311f2a53987       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      8 minutes ago        Exited              coredns                   0                   20fe1c78f1612       coredns-7c65d6cfc9-jnskb
	18292c905057d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   e67606f45d94f       storage-provisioner
	582dd3e8149a0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      8 minutes ago        Exited              kube-proxy                0                   98f6fdf0aa6c6       kube-proxy-b7xqf
	b6d798e155657       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      8 minutes ago        Exited              kindnet-cni               0                   0a31e2d64ec9c       kindnet-kqbn9
	c67f551b7827a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      8 minutes ago        Exited              kube-scheduler            0                   7e8ad24f26573       kube-scheduler-multinode-900777
	50530a2c31bb0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      8 minutes ago        Exited              etcd                      0                   38ec06816887a       etcd-multinode-900777
	a0931aca8b32f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      8 minutes ago        Exited              kube-controller-manager   0                   6f66940a17df9       kube-controller-manager-multinode-900777
	e7fd49d874245       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      8 minutes ago        Exited              kube-apiserver            0                   3300e73a467f1       kube-apiserver-multinode-900777
	
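The container status table above is a rendering of the same data returned by the /runtime.v1.RuntimeService/ListContainers debug responses earlier in the log. As a minimal sketch (not part of this test run), the listing can be fetched directly from CRI-O over the socket advertised in the node annotations (unix:///var/run/crio/crio.sock), assuming the generated k8s.io/cri-api v1 Go bindings; field accessors mirror the Id/Metadata/State/Labels fields visible in the raw responses:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path, taken from the kubeadm cri-socket annotation in "describe nodes" below.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC as the /runtime.v1.RuntimeService/ListContainers entries in the crio debug log;
	// an empty filter returns the full container list ("No filters were applied").
	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%.13s  %-25s  %-17s  attempt=%d  pod=%s\n",
			c.GetId(),                               // CONTAINER column (truncated id)
			c.GetMetadata().GetName(),               // NAME column
			c.GetState(),                            // CONTAINER_RUNNING / CONTAINER_EXITED
			c.GetMetadata().GetAttempt(),            // ATTEMPT column
			c.GetLabels()["io.kubernetes.pod.name"]) // POD column
	}
}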
	
	==> coredns [5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49289 - 29305 "HINFO IN 5010109645727926349.4505930523966218435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009798702s
	
	
	==> coredns [6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700] <==
	[INFO] 10.244.0.3:54063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001915783s
	[INFO] 10.244.0.3:41011 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091487s
	[INFO] 10.244.0.3:43261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058762s
	[INFO] 10.244.0.3:47557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00141351s
	[INFO] 10.244.0.3:43181 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062993s
	[INFO] 10.244.0.3:53004 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079081s
	[INFO] 10.244.0.3:38072 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052934s
	[INFO] 10.244.1.2:37783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153201s
	[INFO] 10.244.1.2:42084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016385s
	[INFO] 10.244.1.2:55726 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010629s
	[INFO] 10.244.1.2:45953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122683s
	[INFO] 10.244.0.3:49442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.3:50398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125338s
	[INFO] 10.244.0.3:52137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013082s
	[INFO] 10.244.0.3:48203 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008742s
	[INFO] 10.244.1.2:44751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013303s
	[INFO] 10.244.1.2:51394 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237966s
	[INFO] 10.244.1.2:58870 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141949s
	[INFO] 10.244.1.2:56924 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133619s
	[INFO] 10.244.0.3:46117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141663s
	[INFO] 10.244.0.3:34248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112414s
	[INFO] 10.244.0.3:35371 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081051s
	[INFO] 10.244.0.3:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083032s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-900777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-900777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=multinode-900777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_43_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:43:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-900777
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:51:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    multinode-900777
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e5c0de8ced04603878beaf61e2a1f5c
	  System UUID:                5e5c0de8-ced0-4603-878b-eaf61e2a1f5c
	  Boot ID:                    9bc3e81b-7635-43d7-95ad-dbda41fde27e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ggs4l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m16s
	  kube-system                 coredns-7c65d6cfc9-jnskb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m22s
	  kube-system                 etcd-multinode-900777                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m28s
	  kube-system                 kindnet-kqbn9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m22s
	  kube-system                 kube-apiserver-multinode-900777             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-controller-manager-multinode-900777    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-b7xqf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 kube-scheduler-multinode-900777             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m20s                  kube-proxy       
	  Normal  Starting                 98s                    kube-proxy       
	  Normal  Starting                 8m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m32s (x8 over 8m33s)  kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m32s (x8 over 8m33s)  kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m32s (x7 over 8m33s)  kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    8m26s                  kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m26s                  kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     8m26s                  kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m22s                  node-controller  Node multinode-900777 event: Registered Node multinode-900777 in Controller
	  Normal  NodeReady                8m10s                  kubelet          Node multinode-900777 status is now: NodeReady
	  Normal  Starting                 104s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)    kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)    kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)    kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                    node-controller  Node multinode-900777 event: Registered Node multinode-900777 in Controller
	
	
	Name:               multinode-900777-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-900777-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=multinode-900777
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_50_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:50:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-900777-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:51:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:50:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:50:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:50:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:51:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-900777-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aac29acd9c27493a8fe0dfbc425b686c
	  System UUID:                aac29acd-9c27-493a-8fe0-dfbc425b686c
	  Boot ID:                    35a12f4b-ee8e-4b62-9b15-87db6c8e7c97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8ggmr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kindnet-7b5r2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-proxy-rghsp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m31s                  kube-proxy  
	  Normal  Starting                 54s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m36s (x2 over 7m37s)  kubelet     Node multinode-900777-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m36s (x2 over 7m37s)  kubelet     Node multinode-900777-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m36s (x2 over 7m37s)  kubelet     Node multinode-900777-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m36s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m18s                  kubelet     Node multinode-900777-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)      kubelet     Node multinode-900777-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)      kubelet     Node multinode-900777-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)      kubelet     Node multinode-900777-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                41s                    kubelet     Node multinode-900777-m02 status is now: NodeReady
	
	
	Name:               multinode-900777-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-900777-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=multinode-900777
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_51_28_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:51:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-900777-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:51:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:51:46 +0000   Fri, 20 Sep 2024 21:51:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:51:46 +0000   Fri, 20 Sep 2024 21:51:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:51:46 +0000   Fri, 20 Sep 2024 21:51:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:51:46 +0000   Fri, 20 Sep 2024 21:51:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    multinode-900777-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 945cda9e414145cba58a008d45c2ad92
	  System UUID:                945cda9e-4141-45cb-a58a-008d45c2ad92
	  Boot ID:                    61a6ae13-3956-4529-afbe-ace4d59f45ed
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xbgx7       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m41s
	  kube-system                 kube-proxy-665pn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m47s                  kube-proxy       
	  Normal  Starting                 6m36s                  kube-proxy       
	  Normal  Starting                 16s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m41s (x2 over 6m42s)  kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m41s (x2 over 6m42s)  kubelet          Node multinode-900777-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m41s (x2 over 6m42s)  kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m22s                  kubelet          Node multinode-900777-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m52s (x2 over 5m52s)  kubelet          Node multinode-900777-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m52s (x2 over 5m52s)  kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m52s (x2 over 5m52s)  kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m33s                  kubelet          Node multinode-900777-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  21s (x2 over 21s)      kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x2 over 21s)      kubelet          Node multinode-900777-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x2 over 21s)      kubelet          Node multinode-900777-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16s                    node-controller  Node multinode-900777-m03 event: Registered Node multinode-900777-m03 in Controller
	  Normal  NodeReady                3s                     kubelet          Node multinode-900777-m03 status is now: NodeReady
	
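The "Allocated resources" percentages in the node descriptions above are the summed pod requests/limits divided by each node's Allocatable values (2 CPUs = 2000m, 2164184Ki memory); for the control-plane node, 850m/2000m ≈ 42% CPU requests and 220Mi/2164184Ki ≈ 10% memory requests. A minimal sketch of that arithmetic, assuming the standard k8s.io/apimachinery resource.Quantity helpers (illustrative only, not code from this test run):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values copied from the control-plane node's "describe nodes" output above.
	allocatableCPU := resource.MustParse("2")         // Allocatable cpu: 2 cores = 2000m
	allocatableMem := resource.MustParse("2164184Ki") // Allocatable memory
	requestedCPU := resource.MustParse("850m")        // summed CPU requests
	requestedMem := resource.MustParse("220Mi")       // summed memory requests

	cpuPct := 100 * float64(requestedCPU.MilliValue()) / float64(allocatableCPU.MilliValue())
	memPct := 100 * float64(requestedMem.Value()) / float64(allocatableMem.Value())

	fmt.Printf("cpu requests:    %.0f%%\n", cpuPct) // ≈ 42%
	fmt.Printf("memory requests: %.0f%%\n", memPct) // ≈ 10%
}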
	
	==> dmesg <==
	[  +0.057422] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.183705] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.127886] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.288084] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.005940] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +5.012101] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.061195] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.501434] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.081672] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.631084] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.816500] kauditd_printk_skb: 43 callbacks suppressed
	[ +12.266700] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:44] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 21:49] systemd-fstab-generator[2660]: Ignoring "noauto" option for root device
	[  +0.151088] systemd-fstab-generator[2672]: Ignoring "noauto" option for root device
	[  +0.177757] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.149739] systemd-fstab-generator[2699]: Ignoring "noauto" option for root device
	[  +0.317927] systemd-fstab-generator[2727]: Ignoring "noauto" option for root device
	[Sep20 21:50] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.086622] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.810771] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +4.710755] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.833751] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.102721] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.803094] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2] <==
	{"level":"info","ts":"2024-09-20T21:43:18.031301Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.033379Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:multinode-900777 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T21:43:18.035089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:43:18.035483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:43:18.036189Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036307Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036924Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:43:18.037719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.150:2379"}
	{"level":"info","ts":"2024-09-20T21:43:18.043508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:43:18.044753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T21:43:18.043595Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T21:43:18.049813Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T21:45:08.064209Z","caller":"traceutil/trace.go:171","msg":"trace[556783716] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"116.534036ms","start":"2024-09-20T21:45:07.947615Z","end":"2024-09-20T21:45:08.064149Z","steps":["trace[556783716] 'process raft request'  (duration: 101.269657ms)","trace[556783716] 'compare'  (duration: 14.908228ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T21:46:02.323411Z","caller":"traceutil/trace.go:171","msg":"trace[2016325494] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"123.459253ms","start":"2024-09-20T21:46:02.199926Z","end":"2024-09-20T21:46:02.323385Z","steps":["trace[2016325494] 'process raft request'  (duration: 122.349125ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T21:48:21.940771Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T21:48:21.940903Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-900777","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	{"level":"warn","ts":"2024-09-20T21:48:21.942167Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:21.942333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:22.024685Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:22.024857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T21:48:22.024957Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2236e2deb63504cb","current-leader-member-id":"2236e2deb63504cb"}
	{"level":"info","ts":"2024-09-20T21:48:22.031386Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:48:22.031587Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:48:22.031615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-900777","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	
	
	==> etcd [cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d] <==
	{"level":"info","ts":"2024-09-20T21:50:06.977412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283)"}
	{"level":"info","ts":"2024-09-20T21:50:06.977633Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","added-peer-id":"2236e2deb63504cb","added-peer-peer-urls":["https://192.168.39.150:2380"]}
	{"level":"info","ts":"2024-09-20T21:50:06.977973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:50:06.978090Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:50:07.011818Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T21:50:07.013660Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"2236e2deb63504cb","initial-advertise-peer-urls":["https://192.168.39.150:2380"],"listen-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T21:50:07.013705Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T21:50:07.013334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:50:07.013736Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:50:08.225118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgPreVoteResp from 2236e2deb63504cb at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgVoteResp from 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became leader at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2236e2deb63504cb elected leader 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.232737Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:50:08.233864Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:50:08.234701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T21:50:08.232686Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:multinode-900777 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T21:50:08.235566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:50:08.235818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T21:50:08.235853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T21:50:08.236604Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:50:08.237391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.150:2379"}
	
	
	==> kernel <==
	 21:51:49 up 9 min,  0 users,  load average: 0.35, 0.18, 0.10
	Linux multinode-900777 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71] <==
	I0920 21:47:39.733970       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:47:49.733797       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:47:49.733969       1 main.go:299] handling current node
	I0920 21:47:49.734003       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:47:49.734068       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:47:49.734229       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:47:49.734255       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:47:59.731367       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:47:59.731467       1 main.go:299] handling current node
	I0920 21:47:59.731494       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:47:59.731500       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:47:59.731633       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:47:59.731658       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:48:09.729138       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:48:09.729274       1 main.go:299] handling current node
	I0920 21:48:09.729314       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:48:09.729321       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:48:09.729494       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:48:09.729518       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:48:19.727605       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:48:19.727725       1 main.go:299] handling current node
	I0920 21:48:19.727772       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:48:19.727791       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:48:19.727928       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:48:19.727949       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd] <==
	I0920 21:51:11.740257       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:51:11.740307       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:51:11.740513       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:51:11.740536       1 main.go:299] handling current node
	I0920 21:51:11.740547       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:51:11.740551       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:51:21.740088       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:51:21.740153       1 main.go:299] handling current node
	I0920 21:51:21.740182       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:51:21.740191       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:51:21.740368       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:51:21.740378       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:51:31.739453       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:51:31.739514       1 main.go:299] handling current node
	I0920 21:51:31.739532       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:51:31.739540       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:51:31.739741       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:51:31.739783       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.2.0/24] 
	I0920 21:51:31.739856       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.68 Flags: [] Table: 0} 
	I0920 21:51:41.740601       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:51:41.740670       1 main.go:299] handling current node
	I0920 21:51:41.740684       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:51:41.740689       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:51:41.740810       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:51:41.740833       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795] <==
	I0920 21:50:09.544632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 21:50:09.550589       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 21:50:09.557990       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 21:50:09.558135       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 21:50:09.558527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 21:50:09.558631       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 21:50:09.558703       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 21:50:09.558739       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 21:50:09.561483       1 aggregator.go:171] initial CRD sync complete...
	I0920 21:50:09.561535       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 21:50:09.561559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 21:50:09.561582       1 cache.go:39] Caches are synced for autoregister controller
	I0920 21:50:09.576711       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 21:50:09.586012       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 21:50:09.606963       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:50:09.607136       1 policy_source.go:224] refreshing policies
	I0920 21:50:09.619249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 21:50:10.456236       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 21:50:11.971962       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 21:50:12.096161       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 21:50:12.110202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 21:50:12.202890       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 21:50:12.210445       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 21:50:13.046713       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 21:50:13.146014       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded] <==
	W0920 21:48:21.966642       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966692       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966725       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966777       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966831       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966887       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966925       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966961       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967009       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967483       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967538       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967584       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967630       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967679       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.969207       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.969450       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970213       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970389       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970457       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970505       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970558       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970615       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970679       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970734       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970791       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c] <==
	I0920 21:45:56.783819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:56.785229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:45:57.965371       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-900777-m03\" does not exist"
	I0920 21:45:57.966458       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:45:57.978491       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-900777-m03" podCIDRs=["10.244.4.0/24"]
	I0920 21:45:57.978637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:57.978843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:57.987155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:58.334473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:58.690545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:02.325485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:08.123683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:16.063007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:46:16.063191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:16.082137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:17.201175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:57.219331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:46:57.219436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m03"
	I0920 21:46:57.236631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:46:57.281190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.093221ms"
	I0920 21:46:57.281278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.403µs"
	I0920 21:47:02.278168       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:47:02.295551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:47:02.341536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:47:12.425995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	
	
	==> kube-controller-manager [a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c] <==
	I0920 21:51:08.810243       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:51:08.829492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="86.238µs"
	I0920 21:51:08.845347       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="71.466µs"
	I0920 21:51:11.495231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.383761ms"
	I0920 21:51:11.495385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="80.008µs"
	I0920 21:51:13.047616       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:51:21.446779       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:51:26.792873       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:26.819214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:27.046912       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:51:27.047426       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.178415       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:51:28.180792       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-900777-m03\" does not exist"
	I0920 21:51:28.192908       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-900777-m03" podCIDRs=["10.244.2.0/24"]
	I0920 21:51:28.193306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.193390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.202903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.582112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.931858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:33.113890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:38.276446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:46.297350       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:51:46.297592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:46.310503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:48.069787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	
	
	==> kube-proxy [582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:43:28.843494       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:43:28.912912       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0920 21:43:28.913175       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:43:28.950442       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:43:28.950547       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:43:28.950592       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:43:28.953189       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:43:28.953613       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:43:28.953856       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:43:28.956425       1 config.go:199] "Starting service config controller"
	I0920 21:43:28.956521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:43:28.956600       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:43:28.956644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:43:28.957529       1 config.go:328] "Starting node config controller"
	I0920 21:43:28.959149       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:43:29.057242       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:43:29.057413       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:43:29.059257       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:50:10.994121       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:50:11.007407       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0920 21:50:11.007518       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:50:11.064058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:50:11.064127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:50:11.064154       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:50:11.068962       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:50:11.069291       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:50:11.069321       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:50:11.072356       1 config.go:199] "Starting service config controller"
	I0920 21:50:11.072441       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:50:11.072461       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:50:11.072465       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:50:11.073565       1 config.go:328] "Starting node config controller"
	I0920 21:50:11.073646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:50:11.172724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:50:11.172788       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:50:11.174309       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2] <==
	I0920 21:50:07.628730       1 serving.go:386] Generated self-signed cert in-memory
	W0920 21:50:09.503291       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 21:50:09.503391       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0920 21:50:09.503426       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 21:50:09.503456       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 21:50:09.533462       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 21:50:09.533584       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:50:09.537639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 21:50:09.537733       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 21:50:09.537766       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 21:50:09.537921       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 21:50:09.638351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf] <==
	E0920 21:43:21.211148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.302399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 21:43:21.302466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.320135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 21:43:21.320188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.351467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 21:43:21.353198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.374401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 21:43:21.374510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.418731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 21:43:21.418788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.473809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 21:43:21.473937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.522691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 21:43:21.522725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.542679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 21:43:21.542816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.554897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 21:43:21.554996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.833319       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 21:43:21.833518       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 21:43:24.012574       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 21:48:21.936369       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 21:48:21.936517       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0920 21:48:21.936688       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 21:50:16 multinode-900777 kubelet[2948]: E0920 21:50:16.013717    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869016012116870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:16 multinode-900777 kubelet[2948]: E0920 21:50:16.014180    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869016012116870,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:26 multinode-900777 kubelet[2948]: E0920 21:50:26.017299    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869026015873168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:26 multinode-900777 kubelet[2948]: E0920 21:50:26.017424    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869026015873168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:36 multinode-900777 kubelet[2948]: E0920 21:50:36.023489    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869036022604505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:36 multinode-900777 kubelet[2948]: E0920 21:50:36.023549    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869036022604505,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:46 multinode-900777 kubelet[2948]: E0920 21:50:46.025670    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869046025021183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:46 multinode-900777 kubelet[2948]: E0920 21:50:46.025743    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869046025021183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:56 multinode-900777 kubelet[2948]: E0920 21:50:56.031000    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869056030653115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:50:56 multinode-900777 kubelet[2948]: E0920 21:50:56.031407    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869056030653115,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:06 multinode-900777 kubelet[2948]: E0920 21:51:06.001517    2948 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:51:06 multinode-900777 kubelet[2948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:51:06 multinode-900777 kubelet[2948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:51:06 multinode-900777 kubelet[2948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:51:06 multinode-900777 kubelet[2948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:51:06 multinode-900777 kubelet[2948]: E0920 21:51:06.035548    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869066034729541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:06 multinode-900777 kubelet[2948]: E0920 21:51:06.035576    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869066034729541,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:16 multinode-900777 kubelet[2948]: E0920 21:51:16.042870    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869076041444530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:16 multinode-900777 kubelet[2948]: E0920 21:51:16.042923    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869076041444530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:26 multinode-900777 kubelet[2948]: E0920 21:51:26.048840    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869086048400333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:26 multinode-900777 kubelet[2948]: E0920 21:51:26.049104    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869086048400333,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:36 multinode-900777 kubelet[2948]: E0920 21:51:36.051419    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869096051175524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:36 multinode-900777 kubelet[2948]: E0920 21:51:36.051460    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869096051175524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:46 multinode-900777 kubelet[2948]: E0920 21:51:46.056772    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869106055666106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:51:46 multinode-900777 kubelet[2948]: E0920 21:51:46.056862    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869106055666106,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 21:51:48.859282   48755 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-9614/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
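The "bufio.Scanner: token too long" failure in the stderr block above comes from Go's default 64 KiB per-line scanner limit, which a line in lastStart.txt exceeds. Below is a minimal, hypothetical sketch of reading such a file with an enlarged scanner buffer; the file name and buffer sizes are assumptions for illustration, not minikube's actual fix.

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path standing in for the lastStart.txt mentioned above.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// bufio.MaxScanTokenSize is 64 KiB; one very long log line trips
	// "token too long". Allow lines of up to 10 MiB instead.
	sc.Buffer(make([]byte, 0, bufio.MaxScanTokenSize), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}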
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-900777 -n multinode-900777
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-900777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (331.90s)
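Separately, the kubelet journal near the end of the captured log above repeats "Eviction manager: failed to get HasDedicatedImageFs ... missing image stats", which is logged when the kubelet cannot derive the image-filesystem stats it needs from the CRI ImageFsInfo response. A rough sketch of issuing that same CRI query against CRI-O's socket is shown below; the socket path, timeout, and dependency choices are assumptions, not taken from the test itself.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket path (it matches the runtime-endpoint later
	// written to /etc/crictl.yaml in this log).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The same RPC the eviction manager relies on for image filesystem usage.
	client := runtimeapi.NewImageServiceClient(conn)
	resp, err := client.ImageFsInfo(ctx, &runtimeapi.ImageFsInfoRequest{})
	if err != nil {
		log.Fatalf("ImageFsInfo: %v", err)
	}
	for _, fs := range resp.GetImageFilesystems() {
		fmt.Printf("mountpoint=%s usedBytes=%d inodesUsed=%d\n",
			fs.GetFsId().GetMountpoint(),
			fs.GetUsedBytes().GetValue(),
			fs.GetInodesUsed().GetValue())
	}
}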

                                                
                                    
TestMultiNode/serial/StopMultiNode (144.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 stop
E0920 21:52:14.876744   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-900777 stop: exit status 82 (2m0.471569022s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-900777-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-900777 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status
E0920 21:54:11.810081   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 status: (18.829404498s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr: (3.360643486s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr": 
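For reference, the stop above returned exit status 82 after the two-minute wait, which the stderr box maps to minikube's GUEST_STOP_TIMEOUT error class. The sketch below drives the same command with an explicit timeout and inspects the exit code; the binary path and profile name are copied from the test, everything else is illustrative only.

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Give the stop three minutes before killing it outright.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "-p", "multinode-900777", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("stop succeeded")
	case errors.As(err, &exitErr):
		// 82 corresponds to the GUEST_STOP_TIMEOUT failure seen above.
		fmt.Println("stop failed with exit code", exitErr.ExitCode())
	default:
		fmt.Println("could not run stop:", err)
	}
}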
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-900777 -n multinode-900777
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 logs -n 25: (1.520151189s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777:/home/docker/cp-test_multinode-900777-m02_multinode-900777.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777 sudo cat                                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m02_multinode-900777.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03:/home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777-m03 sudo cat                                   | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp testdata/cp-test.txt                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777-m03.txt            |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777:/home/docker/cp-test_multinode-900777-m03_multinode-900777.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777 sudo cat                                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m03_multinode-900777.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt                       | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m02:/home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n                                                                 | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | multinode-900777-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-900777 ssh -n multinode-900777-m02 sudo cat                                   | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	|         | /home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-900777 node stop m03                                                          | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:45 UTC |
	| node    | multinode-900777 node start                                                             | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:45 UTC | 20 Sep 24 21:46 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-900777                                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:46 UTC |                     |
	| stop    | -p multinode-900777                                                                     | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:46 UTC |                     |
	| start   | -p multinode-900777                                                                     | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:48 UTC | 20 Sep 24 21:51 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-900777                                                                | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:51 UTC |                     |
	| node    | multinode-900777 node delete                                                            | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:51 UTC | 20 Sep 24 21:51 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-900777 stop                                                                   | multinode-900777 | jenkins | v1.34.0 | 20 Sep 24 21:51 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 21:48:20
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 21:48:20.784538   47651 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:48:20.784664   47651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:48:20.784672   47651 out.go:358] Setting ErrFile to fd 2...
	I0920 21:48:20.784676   47651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:48:20.784852   47651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:48:20.785392   47651 out.go:352] Setting JSON to false
	I0920 21:48:20.786260   47651 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5449,"bootTime":1726863452,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:48:20.786352   47651 start.go:139] virtualization: kvm guest
	I0920 21:48:20.792463   47651 out.go:177] * [multinode-900777] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:48:20.797343   47651 notify.go:220] Checking for updates...
	I0920 21:48:20.802650   47651 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:48:20.807172   47651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:48:20.808415   47651 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:48:20.809673   47651 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:48:20.810913   47651 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:48:20.812162   47651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:48:20.814081   47651 config.go:182] Loaded profile config "multinode-900777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:48:20.814211   47651 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:48:20.814906   47651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:48:20.814979   47651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:48:20.830028   47651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0920 21:48:20.830394   47651 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:48:20.830916   47651 main.go:141] libmachine: Using API Version  1
	I0920 21:48:20.830936   47651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:48:20.831282   47651 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:48:20.831453   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.866926   47651 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:48:20.868110   47651 start.go:297] selected driver: kvm2
	I0920 21:48:20.868122   47651 start.go:901] validating driver "kvm2" against &{Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspe
ktor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:48:20.868252   47651 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:48:20.868550   47651 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:48:20.868612   47651 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 21:48:20.883758   47651 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 21:48:20.884452   47651 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 21:48:20.884481   47651 cni.go:84] Creating CNI manager for ""
	I0920 21:48:20.884537   47651 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 21:48:20.884590   47651 start.go:340] cluster config:
	{Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:
false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:48:20.884708   47651 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 21:48:20.887095   47651 out.go:177] * Starting "multinode-900777" primary control-plane node in "multinode-900777" cluster
	I0920 21:48:20.888661   47651 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:48:20.888696   47651 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 21:48:20.888703   47651 cache.go:56] Caching tarball of preloaded images
	I0920 21:48:20.888801   47651 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 21:48:20.888815   47651 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 21:48:20.888929   47651 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/config.json ...
	I0920 21:48:20.889131   47651 start.go:360] acquireMachinesLock for multinode-900777: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 21:48:20.889182   47651 start.go:364] duration metric: took 24.733µs to acquireMachinesLock for "multinode-900777"
	I0920 21:48:20.889201   47651 start.go:96] Skipping create...Using existing machine configuration
	I0920 21:48:20.889210   47651 fix.go:54] fixHost starting: 
	I0920 21:48:20.889491   47651 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:48:20.889528   47651 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:48:20.904368   47651 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37765
	I0920 21:48:20.904967   47651 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:48:20.905494   47651 main.go:141] libmachine: Using API Version  1
	I0920 21:48:20.905514   47651 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:48:20.905919   47651 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:48:20.906152   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.906361   47651 main.go:141] libmachine: (multinode-900777) Calling .GetState
	I0920 21:48:20.908349   47651 fix.go:112] recreateIfNeeded on multinode-900777: state=Running err=<nil>
	W0920 21:48:20.908372   47651 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 21:48:20.910449   47651 out.go:177] * Updating the running kvm2 "multinode-900777" VM ...
	I0920 21:48:20.911796   47651 machine.go:93] provisionDockerMachine start ...
	I0920 21:48:20.911822   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:48:20.912022   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:20.914661   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:20.915209   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:20.915264   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:20.915440   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:20.915621   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:20.915778   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:20.915913   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:20.916076   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:20.916257   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:20.916269   47651 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 21:48:21.037220   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-900777
	
	I0920 21:48:21.037249   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.037496   47651 buildroot.go:166] provisioning hostname "multinode-900777"
	I0920 21:48:21.037521   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.037675   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.040370   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.040761   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.040785   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.041000   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.041170   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.041347   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.041479   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.041633   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.041794   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.041807   47651 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-900777 && echo "multinode-900777" | sudo tee /etc/hostname
	I0920 21:48:21.175986   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-900777
	
	I0920 21:48:21.176017   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.178717   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.179086   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.179115   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.179320   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.179456   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.179549   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.179703   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.179939   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.180150   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.180167   47651 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-900777' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-900777/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-900777' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 21:48:21.296471   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 21:48:21.296505   47651 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 21:48:21.296542   47651 buildroot.go:174] setting up certificates
	I0920 21:48:21.296551   47651 provision.go:84] configureAuth start
	I0920 21:48:21.296562   47651 main.go:141] libmachine: (multinode-900777) Calling .GetMachineName
	I0920 21:48:21.296863   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:48:21.299333   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.299678   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.299703   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.299864   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.302094   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.302453   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.302482   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.302602   47651 provision.go:143] copyHostCerts
	I0920 21:48:21.302632   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:48:21.302672   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 21:48:21.302681   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 21:48:21.302762   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 21:48:21.302879   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:48:21.302908   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 21:48:21.302917   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 21:48:21.302957   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 21:48:21.303055   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:48:21.303072   47651 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 21:48:21.303079   47651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 21:48:21.303106   47651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 21:48:21.303260   47651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.multinode-900777 san=[127.0.0.1 192.168.39.150 localhost minikube multinode-900777]
	I0920 21:48:21.635775   47651 provision.go:177] copyRemoteCerts
	I0920 21:48:21.635831   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 21:48:21.635855   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.638820   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.639275   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.639305   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.639482   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.639637   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.639790   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.639945   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:48:21.729689   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0920 21:48:21.729775   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 21:48:21.758179   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0920 21:48:21.758242   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0920 21:48:21.782574   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0920 21:48:21.782641   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 21:48:21.807415   47651 provision.go:87] duration metric: took 510.850527ms to configureAuth
	I0920 21:48:21.807444   47651 buildroot.go:189] setting minikube options for container-runtime
	I0920 21:48:21.807710   47651 config.go:182] Loaded profile config "multinode-900777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:48:21.807785   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:48:21.810282   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.810624   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:48:21.810643   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:48:21.810805   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:48:21.811017   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.811170   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:48:21.811292   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:48:21.811452   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:48:21.811616   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:48:21.811631   47651 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 21:49:52.548542   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 21:49:52.548572   47651 machine.go:96] duration metric: took 1m31.636756344s to provisionDockerMachine
	I0920 21:49:52.548586   47651 start.go:293] postStartSetup for "multinode-900777" (driver="kvm2")
	I0920 21:49:52.548600   47651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 21:49:52.548620   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.548899   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 21:49:52.548929   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.552016   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.552433   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.552452   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.552684   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.552864   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.553033   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.553176   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.643405   47651 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 21:49:52.648061   47651 command_runner.go:130] > NAME=Buildroot
	I0920 21:49:52.648087   47651 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0920 21:49:52.648091   47651 command_runner.go:130] > ID=buildroot
	I0920 21:49:52.648097   47651 command_runner.go:130] > VERSION_ID=2023.02.9
	I0920 21:49:52.648102   47651 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0920 21:49:52.648155   47651 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 21:49:52.648170   47651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 21:49:52.648239   47651 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 21:49:52.648327   47651 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 21:49:52.648336   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /etc/ssl/certs/167852.pem
	I0920 21:49:52.648432   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 21:49:52.658589   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:49:52.686272   47651 start.go:296] duration metric: took 137.67397ms for postStartSetup
	I0920 21:49:52.686308   47651 fix.go:56] duration metric: took 1m31.797098284s for fixHost
	I0920 21:49:52.686327   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.688869   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.689389   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.689420   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.689543   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.689752   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.689967   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.690175   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.690373   47651 main.go:141] libmachine: Using SSH client type: native
	I0920 21:49:52.690572   47651 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.150 22 <nil> <nil>}
	I0920 21:49:52.690587   47651 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 21:49:52.804190   47651 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726868992.772975243
	
	I0920 21:49:52.804216   47651 fix.go:216] guest clock: 1726868992.772975243
	I0920 21:49:52.804224   47651 fix.go:229] Guest: 2024-09-20 21:49:52.772975243 +0000 UTC Remote: 2024-09-20 21:49:52.686311783 +0000 UTC m=+91.937147563 (delta=86.66346ms)
	I0920 21:49:52.804253   47651 fix.go:200] guest clock delta is within tolerance: 86.66346ms
	I0920 21:49:52.804258   47651 start.go:83] releasing machines lock for "multinode-900777", held for 1m31.915064728s
	I0920 21:49:52.804275   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.804528   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:49:52.807221   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.807615   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.807643   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.807835   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808365   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808513   47651 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:49:52.808606   47651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 21:49:52.808651   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.808755   47651 ssh_runner.go:195] Run: cat /version.json
	I0920 21:49:52.808779   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:49:52.811371   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811561   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811740   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.811769   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811925   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:49:52.811957   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:49:52.811939   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.812148   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.812181   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:49:52.812319   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.812340   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:49:52.812470   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.812632   47651 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:49:52.812783   47651 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:49:52.896347   47651 command_runner.go:130] > {"iso_version": "v1.34.0-1726784654-19672", "kicbase_version": "v0.0.45-1726589491-19662", "minikube_version": "v1.34.0", "commit": "342ed9b49b7fd0c6b2cb4410be5c5d5251f51ed8"}
	I0920 21:49:52.896702   47651 ssh_runner.go:195] Run: systemctl --version
	I0920 21:49:52.926088   47651 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0920 21:49:52.926765   47651 command_runner.go:130] > systemd 252 (252)
	I0920 21:49:52.926806   47651 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0920 21:49:52.926883   47651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 21:49:53.088950   47651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 21:49:53.095570   47651 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0920 21:49:53.095961   47651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 21:49:53.096040   47651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 21:49:53.105797   47651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 21:49:53.105822   47651 start.go:495] detecting cgroup driver to use...
	I0920 21:49:53.105890   47651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 21:49:53.123258   47651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 21:49:53.140103   47651 docker.go:217] disabling cri-docker service (if available) ...
	I0920 21:49:53.140167   47651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 21:49:53.155312   47651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 21:49:53.169497   47651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 21:49:53.319742   47651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 21:49:53.467637   47651 docker.go:233] disabling docker service ...
	I0920 21:49:53.467719   47651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 21:49:53.485181   47651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 21:49:53.500565   47651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 21:49:53.650384   47651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 21:49:53.798749   47651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 21:49:53.813840   47651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 21:49:53.833484   47651 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0920 21:49:53.833756   47651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 21:49:53.833825   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.845284   47651 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 21:49:53.845378   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.856891   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.869921   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.882686   47651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 21:49:53.897199   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.910301   47651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.922227   47651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 21:49:53.935444   47651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 21:49:53.948038   47651 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0920 21:49:53.948104   47651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 21:49:53.960626   47651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:49:54.109666   47651 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 21:50:03.438119   47651 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.328416548s)
	I0920 21:50:03.438152   47651 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 21:50:03.438197   47651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 21:50:03.444113   47651 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0920 21:50:03.444141   47651 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0920 21:50:03.444156   47651 command_runner.go:130] > Device: 0,22	Inode: 1339        Links: 1
	I0920 21:50:03.444165   47651 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 21:50:03.444173   47651 command_runner.go:130] > Access: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444206   47651 command_runner.go:130] > Modify: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444218   47651 command_runner.go:130] > Change: 2024-09-20 21:50:03.284951290 +0000
	I0920 21:50:03.444224   47651 command_runner.go:130] >  Birth: -
	I0920 21:50:03.444246   47651 start.go:563] Will wait 60s for crictl version
	I0920 21:50:03.444288   47651 ssh_runner.go:195] Run: which crictl
	I0920 21:50:03.448294   47651 command_runner.go:130] > /usr/bin/crictl
	I0920 21:50:03.448362   47651 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 21:50:03.488510   47651 command_runner.go:130] > Version:  0.1.0
	I0920 21:50:03.488536   47651 command_runner.go:130] > RuntimeName:  cri-o
	I0920 21:50:03.488540   47651 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0920 21:50:03.488545   47651 command_runner.go:130] > RuntimeApiVersion:  v1
	I0920 21:50:03.489715   47651 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 21:50:03.489792   47651 ssh_runner.go:195] Run: crio --version
	I0920 21:50:03.522583   47651 command_runner.go:130] > crio version 1.29.1
	I0920 21:50:03.522612   47651 command_runner.go:130] > Version:        1.29.1
	I0920 21:50:03.522619   47651 command_runner.go:130] > GitCommit:      unknown
	I0920 21:50:03.522624   47651 command_runner.go:130] > GitCommitDate:  unknown
	I0920 21:50:03.522628   47651 command_runner.go:130] > GitTreeState:   clean
	I0920 21:50:03.522634   47651 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 21:50:03.522639   47651 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 21:50:03.522642   47651 command_runner.go:130] > Compiler:       gc
	I0920 21:50:03.522647   47651 command_runner.go:130] > Platform:       linux/amd64
	I0920 21:50:03.522651   47651 command_runner.go:130] > Linkmode:       dynamic
	I0920 21:50:03.522655   47651 command_runner.go:130] > BuildTags:      
	I0920 21:50:03.522659   47651 command_runner.go:130] >   containers_image_ostree_stub
	I0920 21:50:03.522663   47651 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 21:50:03.522668   47651 command_runner.go:130] >   btrfs_noversion
	I0920 21:50:03.522675   47651 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 21:50:03.522682   47651 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 21:50:03.522688   47651 command_runner.go:130] >   seccomp
	I0920 21:50:03.522695   47651 command_runner.go:130] > LDFlags:          unknown
	I0920 21:50:03.522703   47651 command_runner.go:130] > SeccompEnabled:   true
	I0920 21:50:03.522710   47651 command_runner.go:130] > AppArmorEnabled:  false
	I0920 21:50:03.522852   47651 ssh_runner.go:195] Run: crio --version
	I0920 21:50:03.552126   47651 command_runner.go:130] > crio version 1.29.1
	I0920 21:50:03.552159   47651 command_runner.go:130] > Version:        1.29.1
	I0920 21:50:03.552167   47651 command_runner.go:130] > GitCommit:      unknown
	I0920 21:50:03.552178   47651 command_runner.go:130] > GitCommitDate:  unknown
	I0920 21:50:03.552184   47651 command_runner.go:130] > GitTreeState:   clean
	I0920 21:50:03.552196   47651 command_runner.go:130] > BuildDate:      2024-09-20T03:55:27Z
	I0920 21:50:03.552200   47651 command_runner.go:130] > GoVersion:      go1.21.6
	I0920 21:50:03.552204   47651 command_runner.go:130] > Compiler:       gc
	I0920 21:50:03.552208   47651 command_runner.go:130] > Platform:       linux/amd64
	I0920 21:50:03.552212   47651 command_runner.go:130] > Linkmode:       dynamic
	I0920 21:50:03.552217   47651 command_runner.go:130] > BuildTags:      
	I0920 21:50:03.552221   47651 command_runner.go:130] >   containers_image_ostree_stub
	I0920 21:50:03.552226   47651 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0920 21:50:03.552231   47651 command_runner.go:130] >   btrfs_noversion
	I0920 21:50:03.552235   47651 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0920 21:50:03.552239   47651 command_runner.go:130] >   libdm_no_deferred_remove
	I0920 21:50:03.552242   47651 command_runner.go:130] >   seccomp
	I0920 21:50:03.552246   47651 command_runner.go:130] > LDFlags:          unknown
	I0920 21:50:03.552250   47651 command_runner.go:130] > SeccompEnabled:   true
	I0920 21:50:03.552255   47651 command_runner.go:130] > AppArmorEnabled:  false
	I0920 21:50:03.555719   47651 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 21:50:03.556988   47651 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:50:03.560063   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:50:03.560454   47651 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:50:03.560481   47651 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:50:03.560734   47651 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 21:50:03.565628   47651 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0920 21:50:03.565733   47651 kubeadm.go:883] updating cluster {Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget
:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 21:50:03.565873   47651 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 21:50:03.565927   47651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:50:03.612451   47651 command_runner.go:130] > {
	I0920 21:50:03.612471   47651 command_runner.go:130] >   "images": [
	I0920 21:50:03.612475   47651 command_runner.go:130] >     {
	I0920 21:50:03.612482   47651 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 21:50:03.612487   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612492   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 21:50:03.612495   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612499   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612510   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 21:50:03.612521   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 21:50:03.612527   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612533   47651 command_runner.go:130] >       "size": "87190579",
	I0920 21:50:03.612543   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612548   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612558   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612568   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612574   47651 command_runner.go:130] >     },
	I0920 21:50:03.612578   47651 command_runner.go:130] >     {
	I0920 21:50:03.612584   47651 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 21:50:03.612590   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612595   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 21:50:03.612599   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612604   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612613   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 21:50:03.612620   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 21:50:03.612626   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612633   47651 command_runner.go:130] >       "size": "1363676",
	I0920 21:50:03.612642   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612653   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612665   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612672   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612675   47651 command_runner.go:130] >     },
	I0920 21:50:03.612679   47651 command_runner.go:130] >     {
	I0920 21:50:03.612684   47651 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 21:50:03.612691   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612696   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 21:50:03.612702   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612706   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612716   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 21:50:03.612731   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 21:50:03.612740   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612747   47651 command_runner.go:130] >       "size": "31470524",
	I0920 21:50:03.612759   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612773   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.612782   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612791   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612794   47651 command_runner.go:130] >     },
	I0920 21:50:03.612800   47651 command_runner.go:130] >     {
	I0920 21:50:03.612806   47651 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 21:50:03.612812   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.612817   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 21:50:03.612823   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612829   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.612845   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 21:50:03.612896   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 21:50:03.612905   47651 command_runner.go:130] >       ],
	I0920 21:50:03.612909   47651 command_runner.go:130] >       "size": "63273227",
	I0920 21:50:03.612915   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.612921   47651 command_runner.go:130] >       "username": "nonroot",
	I0920 21:50:03.612941   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.612952   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.612961   47651 command_runner.go:130] >     },
	I0920 21:50:03.612976   47651 command_runner.go:130] >     {
	I0920 21:50:03.612989   47651 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 21:50:03.613005   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613014   47651 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 21:50:03.613022   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613031   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613043   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 21:50:03.613058   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 21:50:03.613066   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613076   47651 command_runner.go:130] >       "size": "149009664",
	I0920 21:50:03.613084   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613093   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613100   47651 command_runner.go:130] >       },
	I0920 21:50:03.613104   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613113   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613121   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613127   47651 command_runner.go:130] >     },
	I0920 21:50:03.613135   47651 command_runner.go:130] >     {
	I0920 21:50:03.613147   47651 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 21:50:03.613156   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613167   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 21:50:03.613176   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613185   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613196   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 21:50:03.613208   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 21:50:03.613218   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613224   47651 command_runner.go:130] >       "size": "95237600",
	I0920 21:50:03.613232   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613242   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613250   47651 command_runner.go:130] >       },
	I0920 21:50:03.613259   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613268   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613277   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613291   47651 command_runner.go:130] >     },
	I0920 21:50:03.613299   47651 command_runner.go:130] >     {
	I0920 21:50:03.613310   47651 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 21:50:03.613322   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613334   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 21:50:03.613342   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613351   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613366   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 21:50:03.613381   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 21:50:03.613388   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613393   47651 command_runner.go:130] >       "size": "89437508",
	I0920 21:50:03.613401   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613411   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613419   47651 command_runner.go:130] >       },
	I0920 21:50:03.613428   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613436   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613446   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613453   47651 command_runner.go:130] >     },
	I0920 21:50:03.613459   47651 command_runner.go:130] >     {
	I0920 21:50:03.613471   47651 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 21:50:03.613477   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613483   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 21:50:03.613491   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613497   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613519   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 21:50:03.613534   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 21:50:03.613543   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613549   47651 command_runner.go:130] >       "size": "92733849",
	I0920 21:50:03.613558   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.613565   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613573   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613578   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613581   47651 command_runner.go:130] >     },
	I0920 21:50:03.613585   47651 command_runner.go:130] >     {
	I0920 21:50:03.613594   47651 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 21:50:03.613602   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613610   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 21:50:03.613615   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613622   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613636   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 21:50:03.613648   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 21:50:03.613653   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613660   47651 command_runner.go:130] >       "size": "68420934",
	I0920 21:50:03.613663   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613667   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.613670   47651 command_runner.go:130] >       },
	I0920 21:50:03.613676   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613682   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613688   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.613694   47651 command_runner.go:130] >     },
	I0920 21:50:03.613700   47651 command_runner.go:130] >     {
	I0920 21:50:03.613711   47651 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 21:50:03.613720   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.613728   47651 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 21:50:03.613736   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613743   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.613753   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 21:50:03.613762   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 21:50:03.613770   47651 command_runner.go:130] >       ],
	I0920 21:50:03.613777   47651 command_runner.go:130] >       "size": "742080",
	I0920 21:50:03.613786   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.613793   47651 command_runner.go:130] >         "value": "65535"
	I0920 21:50:03.613801   47651 command_runner.go:130] >       },
	I0920 21:50:03.613807   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.613818   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.613827   47651 command_runner.go:130] >       "pinned": true
	I0920 21:50:03.613835   47651 command_runner.go:130] >     }
	I0920 21:50:03.613839   47651 command_runner.go:130] >   ]
	I0920 21:50:03.613843   47651 command_runner.go:130] > }
	I0920 21:50:03.614023   47651 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:50:03.614035   47651 crio.go:433] Images already preloaded, skipping extraction
	I0920 21:50:03.614085   47651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 21:50:03.648366   47651 command_runner.go:130] > {
	I0920 21:50:03.648390   47651 command_runner.go:130] >   "images": [
	I0920 21:50:03.648394   47651 command_runner.go:130] >     {
	I0920 21:50:03.648402   47651 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0920 21:50:03.648406   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648412   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0920 21:50:03.648416   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648420   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648428   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I0920 21:50:03.648438   47651 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0920 21:50:03.648446   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648453   47651 command_runner.go:130] >       "size": "87190579",
	I0920 21:50:03.648462   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648471   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648496   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648506   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648511   47651 command_runner.go:130] >     },
	I0920 21:50:03.648516   47651 command_runner.go:130] >     {
	I0920 21:50:03.648524   47651 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0920 21:50:03.648533   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648540   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0920 21:50:03.648545   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648553   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648564   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0920 21:50:03.648577   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0920 21:50:03.648585   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648594   47651 command_runner.go:130] >       "size": "1363676",
	I0920 21:50:03.648605   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648613   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648622   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648631   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648638   47651 command_runner.go:130] >     },
	I0920 21:50:03.648647   47651 command_runner.go:130] >     {
	I0920 21:50:03.648657   47651 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0920 21:50:03.648663   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648668   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0920 21:50:03.648674   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648678   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648687   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0920 21:50:03.648697   47651 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0920 21:50:03.648703   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648707   47651 command_runner.go:130] >       "size": "31470524",
	I0920 21:50:03.648713   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648718   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648724   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648727   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648733   47651 command_runner.go:130] >     },
	I0920 21:50:03.648737   47651 command_runner.go:130] >     {
	I0920 21:50:03.648743   47651 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0920 21:50:03.648750   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648762   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0920 21:50:03.648766   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648769   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648778   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I0920 21:50:03.648790   47651 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I0920 21:50:03.648796   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648801   47651 command_runner.go:130] >       "size": "63273227",
	I0920 21:50:03.648807   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.648811   47651 command_runner.go:130] >       "username": "nonroot",
	I0920 21:50:03.648821   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648828   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648832   47651 command_runner.go:130] >     },
	I0920 21:50:03.648837   47651 command_runner.go:130] >     {
	I0920 21:50:03.648843   47651 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0920 21:50:03.648848   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648853   47651 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0920 21:50:03.648858   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648862   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648870   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I0920 21:50:03.648877   47651 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0920 21:50:03.648882   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648887   47651 command_runner.go:130] >       "size": "149009664",
	I0920 21:50:03.648892   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.648896   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.648902   47651 command_runner.go:130] >       },
	I0920 21:50:03.648905   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.648912   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.648916   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.648922   47651 command_runner.go:130] >     },
	I0920 21:50:03.648925   47651 command_runner.go:130] >     {
	I0920 21:50:03.648933   47651 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0920 21:50:03.648938   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.648944   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0920 21:50:03.648949   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648953   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.648963   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I0920 21:50:03.648972   47651 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0920 21:50:03.648978   47651 command_runner.go:130] >       ],
	I0920 21:50:03.648982   47651 command_runner.go:130] >       "size": "95237600",
	I0920 21:50:03.648988   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.648991   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.648997   47651 command_runner.go:130] >       },
	I0920 21:50:03.649003   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649010   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649014   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649020   47651 command_runner.go:130] >     },
	I0920 21:50:03.649024   47651 command_runner.go:130] >     {
	I0920 21:50:03.649032   47651 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0920 21:50:03.649035   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649046   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0920 21:50:03.649052   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649056   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649065   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I0920 21:50:03.649075   47651 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I0920 21:50:03.649084   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649090   47651 command_runner.go:130] >       "size": "89437508",
	I0920 21:50:03.649094   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649099   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.649102   47651 command_runner.go:130] >       },
	I0920 21:50:03.649106   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649110   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649115   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649120   47651 command_runner.go:130] >     },
	I0920 21:50:03.649123   47651 command_runner.go:130] >     {
	I0920 21:50:03.649129   47651 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0920 21:50:03.649135   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649139   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0920 21:50:03.649145   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649149   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649164   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I0920 21:50:03.649173   47651 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I0920 21:50:03.649178   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649182   47651 command_runner.go:130] >       "size": "92733849",
	I0920 21:50:03.649188   47651 command_runner.go:130] >       "uid": null,
	I0920 21:50:03.649192   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649198   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649202   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649207   47651 command_runner.go:130] >     },
	I0920 21:50:03.649210   47651 command_runner.go:130] >     {
	I0920 21:50:03.649216   47651 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0920 21:50:03.649222   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649227   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0920 21:50:03.649232   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649236   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649246   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I0920 21:50:03.649262   47651 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I0920 21:50:03.649268   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649272   47651 command_runner.go:130] >       "size": "68420934",
	I0920 21:50:03.649277   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649281   47651 command_runner.go:130] >         "value": "0"
	I0920 21:50:03.649285   47651 command_runner.go:130] >       },
	I0920 21:50:03.649291   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649294   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649298   47651 command_runner.go:130] >       "pinned": false
	I0920 21:50:03.649303   47651 command_runner.go:130] >     },
	I0920 21:50:03.649307   47651 command_runner.go:130] >     {
	I0920 21:50:03.649315   47651 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0920 21:50:03.649321   47651 command_runner.go:130] >       "repoTags": [
	I0920 21:50:03.649325   47651 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0920 21:50:03.649330   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649334   47651 command_runner.go:130] >       "repoDigests": [
	I0920 21:50:03.649343   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I0920 21:50:03.649355   47651 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0920 21:50:03.649360   47651 command_runner.go:130] >       ],
	I0920 21:50:03.649364   47651 command_runner.go:130] >       "size": "742080",
	I0920 21:50:03.649369   47651 command_runner.go:130] >       "uid": {
	I0920 21:50:03.649374   47651 command_runner.go:130] >         "value": "65535"
	I0920 21:50:03.649379   47651 command_runner.go:130] >       },
	I0920 21:50:03.649383   47651 command_runner.go:130] >       "username": "",
	I0920 21:50:03.649389   47651 command_runner.go:130] >       "spec": null,
	I0920 21:50:03.649394   47651 command_runner.go:130] >       "pinned": true
	I0920 21:50:03.649399   47651 command_runner.go:130] >     }
	I0920 21:50:03.649403   47651 command_runner.go:130] >   ]
	I0920 21:50:03.649408   47651 command_runner.go:130] > }
	I0920 21:50:03.649549   47651 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 21:50:03.649562   47651 cache_images.go:84] Images are preloaded, skipping loading
	I0920 21:50:03.649569   47651 kubeadm.go:934] updating node { 192.168.39.150 8443 v1.31.1 crio true true} ...
	I0920 21:50:03.649656   47651 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-900777 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.150
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 21:50:03.649723   47651 ssh_runner.go:195] Run: crio config
	I0920 21:50:03.685818   47651 command_runner.go:130] ! time="2024-09-20 21:50:03.654073911Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0920 21:50:03.691420   47651 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0920 21:50:03.698856   47651 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0920 21:50:03.698886   47651 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0920 21:50:03.698894   47651 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0920 21:50:03.698898   47651 command_runner.go:130] > #
	I0920 21:50:03.698904   47651 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0920 21:50:03.698910   47651 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0920 21:50:03.698916   47651 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0920 21:50:03.698922   47651 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0920 21:50:03.698926   47651 command_runner.go:130] > # reload'.
	I0920 21:50:03.698932   47651 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0920 21:50:03.698938   47651 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0920 21:50:03.698944   47651 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0920 21:50:03.698949   47651 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0920 21:50:03.698976   47651 command_runner.go:130] > [crio]
	I0920 21:50:03.698990   47651 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0920 21:50:03.699001   47651 command_runner.go:130] > # containers images, in this directory.
	I0920 21:50:03.699011   47651 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0920 21:50:03.699019   47651 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0920 21:50:03.699025   47651 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0920 21:50:03.699032   47651 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0920 21:50:03.699039   47651 command_runner.go:130] > # imagestore = ""
	I0920 21:50:03.699045   47651 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0920 21:50:03.699053   47651 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0920 21:50:03.699057   47651 command_runner.go:130] > storage_driver = "overlay"
	I0920 21:50:03.699063   47651 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0920 21:50:03.699069   47651 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0920 21:50:03.699073   47651 command_runner.go:130] > storage_option = [
	I0920 21:50:03.699078   47651 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0920 21:50:03.699083   47651 command_runner.go:130] > ]
	I0920 21:50:03.699089   47651 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0920 21:50:03.699095   47651 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0920 21:50:03.699102   47651 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0920 21:50:03.699108   47651 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0920 21:50:03.699116   47651 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0920 21:50:03.699121   47651 command_runner.go:130] > # always happen on a node reboot
	I0920 21:50:03.699125   47651 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0920 21:50:03.699136   47651 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0920 21:50:03.699143   47651 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0920 21:50:03.699149   47651 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0920 21:50:03.699154   47651 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0920 21:50:03.699164   47651 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0920 21:50:03.699173   47651 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0920 21:50:03.699177   47651 command_runner.go:130] > # internal_wipe = true
	I0920 21:50:03.699187   47651 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0920 21:50:03.699194   47651 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0920 21:50:03.699198   47651 command_runner.go:130] > # internal_repair = false
	I0920 21:50:03.699203   47651 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0920 21:50:03.699215   47651 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0920 21:50:03.699223   47651 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0920 21:50:03.699230   47651 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0920 21:50:03.699239   47651 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0920 21:50:03.699245   47651 command_runner.go:130] > [crio.api]
	I0920 21:50:03.699250   47651 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0920 21:50:03.699257   47651 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0920 21:50:03.699262   47651 command_runner.go:130] > # IP address on which the stream server will listen.
	I0920 21:50:03.699267   47651 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0920 21:50:03.699274   47651 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0920 21:50:03.699280   47651 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0920 21:50:03.699284   47651 command_runner.go:130] > # stream_port = "0"
	I0920 21:50:03.699291   47651 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0920 21:50:03.699296   47651 command_runner.go:130] > # stream_enable_tls = false
	I0920 21:50:03.699304   47651 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0920 21:50:03.699310   47651 command_runner.go:130] > # stream_idle_timeout = ""
	I0920 21:50:03.699316   47651 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0920 21:50:03.699324   47651 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0920 21:50:03.699330   47651 command_runner.go:130] > # minutes.
	I0920 21:50:03.699334   47651 command_runner.go:130] > # stream_tls_cert = ""
	I0920 21:50:03.699341   47651 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0920 21:50:03.699347   47651 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0920 21:50:03.699353   47651 command_runner.go:130] > # stream_tls_key = ""
	I0920 21:50:03.699359   47651 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0920 21:50:03.699367   47651 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0920 21:50:03.699379   47651 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0920 21:50:03.699385   47651 command_runner.go:130] > # stream_tls_ca = ""
	I0920 21:50:03.699392   47651 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 21:50:03.699398   47651 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0920 21:50:03.699405   47651 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0920 21:50:03.699412   47651 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0920 21:50:03.699418   47651 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0920 21:50:03.699425   47651 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0920 21:50:03.699429   47651 command_runner.go:130] > [crio.runtime]
	I0920 21:50:03.699435   47651 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0920 21:50:03.699442   47651 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0920 21:50:03.699446   47651 command_runner.go:130] > # "nofile=1024:2048"
	I0920 21:50:03.699454   47651 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0920 21:50:03.699458   47651 command_runner.go:130] > # default_ulimits = [
	I0920 21:50:03.699464   47651 command_runner.go:130] > # ]
	I0920 21:50:03.699469   47651 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0920 21:50:03.699475   47651 command_runner.go:130] > # no_pivot = false
	I0920 21:50:03.699484   47651 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0920 21:50:03.699492   47651 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0920 21:50:03.699497   47651 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0920 21:50:03.699505   47651 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0920 21:50:03.699509   47651 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0920 21:50:03.699516   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 21:50:03.699521   47651 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0920 21:50:03.699528   47651 command_runner.go:130] > # Cgroup setting for conmon
	I0920 21:50:03.699534   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0920 21:50:03.699540   47651 command_runner.go:130] > conmon_cgroup = "pod"
	I0920 21:50:03.699546   47651 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0920 21:50:03.699553   47651 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0920 21:50:03.699559   47651 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0920 21:50:03.699565   47651 command_runner.go:130] > conmon_env = [
	I0920 21:50:03.699571   47651 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 21:50:03.699576   47651 command_runner.go:130] > ]
	I0920 21:50:03.699581   47651 command_runner.go:130] > # Additional environment variables to set for all the
	I0920 21:50:03.699588   47651 command_runner.go:130] > # containers. These are overridden if set in the
	I0920 21:50:03.699593   47651 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0920 21:50:03.699599   47651 command_runner.go:130] > # default_env = [
	I0920 21:50:03.699602   47651 command_runner.go:130] > # ]
	I0920 21:50:03.699608   47651 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0920 21:50:03.699617   47651 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0920 21:50:03.699624   47651 command_runner.go:130] > # selinux = false
	I0920 21:50:03.699630   47651 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0920 21:50:03.699638   47651 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0920 21:50:03.699646   47651 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0920 21:50:03.699650   47651 command_runner.go:130] > # seccomp_profile = ""
	I0920 21:50:03.699656   47651 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0920 21:50:03.699664   47651 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0920 21:50:03.699671   47651 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0920 21:50:03.699677   47651 command_runner.go:130] > # which might increase security.
	I0920 21:50:03.699684   47651 command_runner.go:130] > # This option is currently deprecated,
	I0920 21:50:03.699690   47651 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0920 21:50:03.699696   47651 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0920 21:50:03.699701   47651 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0920 21:50:03.699709   47651 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0920 21:50:03.699718   47651 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0920 21:50:03.699726   47651 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0920 21:50:03.699733   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.699737   47651 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0920 21:50:03.699745   47651 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0920 21:50:03.699751   47651 command_runner.go:130] > # the cgroup blockio controller.
	I0920 21:50:03.699757   47651 command_runner.go:130] > # blockio_config_file = ""
	I0920 21:50:03.699765   47651 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0920 21:50:03.699771   47651 command_runner.go:130] > # blockio parameters.
	I0920 21:50:03.699775   47651 command_runner.go:130] > # blockio_reload = false
	I0920 21:50:03.699781   47651 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0920 21:50:03.699787   47651 command_runner.go:130] > # irqbalance daemon.
	I0920 21:50:03.699792   47651 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0920 21:50:03.699800   47651 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0920 21:50:03.699809   47651 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0920 21:50:03.699817   47651 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0920 21:50:03.699823   47651 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0920 21:50:03.699831   47651 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0920 21:50:03.699837   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.699841   47651 command_runner.go:130] > # rdt_config_file = ""
	I0920 21:50:03.699848   47651 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0920 21:50:03.699853   47651 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0920 21:50:03.699869   47651 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0920 21:50:03.699876   47651 command_runner.go:130] > # separate_pull_cgroup = ""
	I0920 21:50:03.699882   47651 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0920 21:50:03.699891   47651 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0920 21:50:03.699896   47651 command_runner.go:130] > # will be added.
	I0920 21:50:03.699900   47651 command_runner.go:130] > # default_capabilities = [
	I0920 21:50:03.699907   47651 command_runner.go:130] > # 	"CHOWN",
	I0920 21:50:03.699911   47651 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0920 21:50:03.699917   47651 command_runner.go:130] > # 	"FSETID",
	I0920 21:50:03.699920   47651 command_runner.go:130] > # 	"FOWNER",
	I0920 21:50:03.699926   47651 command_runner.go:130] > # 	"SETGID",
	I0920 21:50:03.699929   47651 command_runner.go:130] > # 	"SETUID",
	I0920 21:50:03.699935   47651 command_runner.go:130] > # 	"SETPCAP",
	I0920 21:50:03.699939   47651 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0920 21:50:03.699946   47651 command_runner.go:130] > # 	"KILL",
	I0920 21:50:03.699949   47651 command_runner.go:130] > # ]
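If the default capability set needed trimming, the commented list above would be uncommented and edited in place; a minimal TOML sketch, with the particular subset chosen purely for illustration:

	default_capabilities = [
		"CHOWN",
		"DAC_OVERRIDE",
		"FOWNER",
		"SETGID",
		"SETUID",
		"NET_BIND_SERVICE",
	]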
	I0920 21:50:03.699956   47651 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0920 21:50:03.699968   47651 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0920 21:50:03.699974   47651 command_runner.go:130] > # add_inheritable_capabilities = false
	I0920 21:50:03.699983   47651 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0920 21:50:03.699991   47651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 21:50:03.699996   47651 command_runner.go:130] > default_sysctls = [
	I0920 21:50:03.700003   47651 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0920 21:50:03.700006   47651 command_runner.go:130] > ]
	I0920 21:50:03.700011   47651 command_runner.go:130] > # List of devices on the host that a
	I0920 21:50:03.700019   47651 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0920 21:50:03.700024   47651 command_runner.go:130] > # allowed_devices = [
	I0920 21:50:03.700028   47651 command_runner.go:130] > # 	"/dev/fuse",
	I0920 21:50:03.700034   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700038   47651 command_runner.go:130] > # List of additional devices, specified as
	I0920 21:50:03.700047   47651 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0920 21:50:03.700055   47651 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0920 21:50:03.700068   47651 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0920 21:50:03.700075   47651 command_runner.go:130] > # additional_devices = [
	I0920 21:50:03.700079   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700086   47651 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0920 21:50:03.700093   47651 command_runner.go:130] > # cdi_spec_dirs = [
	I0920 21:50:03.700097   47651 command_runner.go:130] > # 	"/etc/cdi",
	I0920 21:50:03.700103   47651 command_runner.go:130] > # 	"/var/run/cdi",
	I0920 21:50:03.700106   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700112   47651 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0920 21:50:03.700120   47651 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0920 21:50:03.700125   47651 command_runner.go:130] > # Defaults to false.
	I0920 21:50:03.700130   47651 command_runner.go:130] > # device_ownership_from_security_context = false
	I0920 21:50:03.700138   47651 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0920 21:50:03.700146   47651 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0920 21:50:03.700152   47651 command_runner.go:130] > # hooks_dir = [
	I0920 21:50:03.700156   47651 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0920 21:50:03.700162   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700168   47651 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0920 21:50:03.700176   47651 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0920 21:50:03.700183   47651 command_runner.go:130] > # its default mounts from the following two files:
	I0920 21:50:03.700186   47651 command_runner.go:130] > #
	I0920 21:50:03.700192   47651 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0920 21:50:03.700200   47651 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0920 21:50:03.700213   47651 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0920 21:50:03.700218   47651 command_runner.go:130] > #
	I0920 21:50:03.700223   47651 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0920 21:50:03.700232   47651 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0920 21:50:03.700240   47651 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0920 21:50:03.700249   47651 command_runner.go:130] > #      only add mounts it finds in this file.
	I0920 21:50:03.700255   47651 command_runner.go:130] > #
	I0920 21:50:03.700259   47651 command_runner.go:130] > # default_mounts_file = ""
	I0920 21:50:03.700268   47651 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0920 21:50:03.700274   47651 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0920 21:50:03.700281   47651 command_runner.go:130] > pids_limit = 1024
	I0920 21:50:03.700287   47651 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0920 21:50:03.700295   47651 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0920 21:50:03.700301   47651 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0920 21:50:03.700311   47651 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0920 21:50:03.700317   47651 command_runner.go:130] > # log_size_max = -1
	I0920 21:50:03.700324   47651 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0920 21:50:03.700330   47651 command_runner.go:130] > # log_to_journald = false
	I0920 21:50:03.700338   47651 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0920 21:50:03.700346   47651 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0920 21:50:03.700351   47651 command_runner.go:130] > # Path to directory for container attach sockets.
	I0920 21:50:03.700358   47651 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0920 21:50:03.700363   47651 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0920 21:50:03.700370   47651 command_runner.go:130] > # bind_mount_prefix = ""
	I0920 21:50:03.700375   47651 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0920 21:50:03.700381   47651 command_runner.go:130] > # read_only = false
	I0920 21:50:03.700387   47651 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0920 21:50:03.700395   47651 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0920 21:50:03.700399   47651 command_runner.go:130] > # live configuration reload.
	I0920 21:50:03.700403   47651 command_runner.go:130] > # log_level = "info"
	I0920 21:50:03.700411   47651 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0920 21:50:03.700418   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.700422   47651 command_runner.go:130] > # log_filter = ""
	I0920 21:50:03.700427   47651 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0920 21:50:03.700436   47651 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0920 21:50:03.700440   47651 command_runner.go:130] > # separated by comma.
	I0920 21:50:03.700449   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700453   47651 command_runner.go:130] > # uid_mappings = ""
	I0920 21:50:03.700459   47651 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0920 21:50:03.700473   47651 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0920 21:50:03.700478   47651 command_runner.go:130] > # separated by comma.
	I0920 21:50:03.700485   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700494   47651 command_runner.go:130] > # gid_mappings = ""
	I0920 21:50:03.700501   47651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0920 21:50:03.700510   47651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 21:50:03.700516   47651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 21:50:03.700525   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700531   47651 command_runner.go:130] > # minimum_mappable_uid = -1
	I0920 21:50:03.700537   47651 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0920 21:50:03.700545   47651 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0920 21:50:03.700550   47651 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0920 21:50:03.700559   47651 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0920 21:50:03.700565   47651 command_runner.go:130] > # minimum_mappable_gid = -1
	I0920 21:50:03.700571   47651 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0920 21:50:03.700579   47651 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0920 21:50:03.700592   47651 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0920 21:50:03.700598   47651 command_runner.go:130] > # ctr_stop_timeout = 30
	I0920 21:50:03.700603   47651 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0920 21:50:03.700611   47651 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0920 21:50:03.700616   47651 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0920 21:50:03.700620   47651 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0920 21:50:03.700626   47651 command_runner.go:130] > drop_infra_ctr = false
	I0920 21:50:03.700633   47651 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0920 21:50:03.700640   47651 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0920 21:50:03.700648   47651 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0920 21:50:03.700654   47651 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0920 21:50:03.700661   47651 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0920 21:50:03.700668   47651 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0920 21:50:03.700676   47651 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0920 21:50:03.700681   47651 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0920 21:50:03.700688   47651 command_runner.go:130] > # shared_cpuset = ""
	I0920 21:50:03.700694   47651 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0920 21:50:03.700701   47651 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0920 21:50:03.700705   47651 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0920 21:50:03.700714   47651 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0920 21:50:03.700720   47651 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0920 21:50:03.700727   47651 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0920 21:50:03.700737   47651 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0920 21:50:03.700743   47651 command_runner.go:130] > # enable_criu_support = false
	I0920 21:50:03.700748   47651 command_runner.go:130] > # Enable/disable the generation of the container,
	I0920 21:50:03.700755   47651 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0920 21:50:03.700761   47651 command_runner.go:130] > # enable_pod_events = false
	I0920 21:50:03.700767   47651 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0920 21:50:03.700784   47651 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0920 21:50:03.700788   47651 command_runner.go:130] > # default_runtime = "runc"
	I0920 21:50:03.700793   47651 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0920 21:50:03.700800   47651 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0920 21:50:03.700810   47651 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0920 21:50:03.700817   47651 command_runner.go:130] > # creation as a file is not desired either.
	I0920 21:50:03.700825   47651 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0920 21:50:03.700832   47651 command_runner.go:130] > # the hostname is being managed dynamically.
	I0920 21:50:03.700836   47651 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0920 21:50:03.700840   47651 command_runner.go:130] > # ]
	I0920 21:50:03.700846   47651 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0920 21:50:03.700854   47651 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0920 21:50:03.700862   47651 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0920 21:50:03.700866   47651 command_runner.go:130] > # Each entry in the table should follow the format:
	I0920 21:50:03.700872   47651 command_runner.go:130] > #
	I0920 21:50:03.700876   47651 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0920 21:50:03.700883   47651 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0920 21:50:03.700901   47651 command_runner.go:130] > # runtime_type = "oci"
	I0920 21:50:03.700908   47651 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0920 21:50:03.700913   47651 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0920 21:50:03.700919   47651 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0920 21:50:03.700924   47651 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0920 21:50:03.700930   47651 command_runner.go:130] > # monitor_env = []
	I0920 21:50:03.700934   47651 command_runner.go:130] > # privileged_without_host_devices = false
	I0920 21:50:03.700940   47651 command_runner.go:130] > # allowed_annotations = []
	I0920 21:50:03.700946   47651 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0920 21:50:03.700951   47651 command_runner.go:130] > # Where:
	I0920 21:50:03.700957   47651 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0920 21:50:03.700966   47651 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0920 21:50:03.700975   47651 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0920 21:50:03.700982   47651 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0920 21:50:03.700991   47651 command_runner.go:130] > #   in $PATH.
	I0920 21:50:03.701001   47651 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0920 21:50:03.701007   47651 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0920 21:50:03.701014   47651 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0920 21:50:03.701019   47651 command_runner.go:130] > #   state.
	I0920 21:50:03.701025   47651 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0920 21:50:03.701033   47651 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0920 21:50:03.701042   47651 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0920 21:50:03.701047   47651 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0920 21:50:03.701055   47651 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0920 21:50:03.701062   47651 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0920 21:50:03.701069   47651 command_runner.go:130] > #   The currently recognized values are:
	I0920 21:50:03.701075   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0920 21:50:03.701084   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0920 21:50:03.701092   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0920 21:50:03.701099   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0920 21:50:03.701108   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0920 21:50:03.701114   47651 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0920 21:50:03.701122   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0920 21:50:03.701130   47651 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0920 21:50:03.701138   47651 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0920 21:50:03.701146   47651 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0920 21:50:03.701152   47651 command_runner.go:130] > #   deprecated option "conmon".
	I0920 21:50:03.701158   47651 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0920 21:50:03.701165   47651 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0920 21:50:03.701171   47651 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0920 21:50:03.701178   47651 command_runner.go:130] > #   should be moved to the container's cgroup
	I0920 21:50:03.701185   47651 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0920 21:50:03.701192   47651 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0920 21:50:03.701198   47651 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0920 21:50:03.701203   47651 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0920 21:50:03.701211   47651 command_runner.go:130] > #
	I0920 21:50:03.701216   47651 command_runner.go:130] > # Using the seccomp notifier feature:
	I0920 21:50:03.701223   47651 command_runner.go:130] > #
	I0920 21:50:03.701229   47651 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0920 21:50:03.701237   47651 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0920 21:50:03.701242   47651 command_runner.go:130] > #
	I0920 21:50:03.701248   47651 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0920 21:50:03.701256   47651 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0920 21:50:03.701261   47651 command_runner.go:130] > #
	I0920 21:50:03.701267   47651 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0920 21:50:03.701272   47651 command_runner.go:130] > # feature.
	I0920 21:50:03.701275   47651 command_runner.go:130] > #
	I0920 21:50:03.701281   47651 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0920 21:50:03.701289   47651 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0920 21:50:03.701296   47651 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0920 21:50:03.701304   47651 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0920 21:50:03.701312   47651 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0920 21:50:03.701317   47651 command_runner.go:130] > #
	I0920 21:50:03.701323   47651 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0920 21:50:03.701330   47651 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0920 21:50:03.701336   47651 command_runner.go:130] > #
	I0920 21:50:03.701341   47651 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0920 21:50:03.701349   47651 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0920 21:50:03.701354   47651 command_runner.go:130] > #
	I0920 21:50:03.701360   47651 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0920 21:50:03.701367   47651 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0920 21:50:03.701371   47651 command_runner.go:130] > # limitation.
	I0920 21:50:03.701376   47651 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0920 21:50:03.701383   47651 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0920 21:50:03.701387   47651 command_runner.go:130] > runtime_type = "oci"
	I0920 21:50:03.701393   47651 command_runner.go:130] > runtime_root = "/run/runc"
	I0920 21:50:03.701397   47651 command_runner.go:130] > runtime_config_path = ""
	I0920 21:50:03.701403   47651 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0920 21:50:03.701408   47651 command_runner.go:130] > monitor_cgroup = "pod"
	I0920 21:50:03.701413   47651 command_runner.go:130] > monitor_exec_cgroup = ""
	I0920 21:50:03.701417   47651 command_runner.go:130] > monitor_env = [
	I0920 21:50:03.701425   47651 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0920 21:50:03.701431   47651 command_runner.go:130] > ]
	I0920 21:50:03.701436   47651 command_runner.go:130] > privileged_without_host_devices = false
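For comparison with the runc entry above, a hedged TOML sketch of registering a second runtime handler following the [crio.runtime.runtimes.runtime-handler] format documented earlier; the crun binary path, runtime_root and the allowed_annotations choice are assumptions for illustration, not part of this cluster's generated config:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # assumed install location
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	monitor_env = [
		"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	]
	allowed_annotations = ["io.kubernetes.cri-o.Devices"]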
	I0920 21:50:03.701442   47651 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0920 21:50:03.701449   47651 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0920 21:50:03.701454   47651 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0920 21:50:03.701462   47651 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0920 21:50:03.701473   47651 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0920 21:50:03.701480   47651 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0920 21:50:03.701489   47651 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0920 21:50:03.701498   47651 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0920 21:50:03.701504   47651 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0920 21:50:03.701513   47651 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0920 21:50:03.701516   47651 command_runner.go:130] > # Example:
	I0920 21:50:03.701521   47651 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0920 21:50:03.701528   47651 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0920 21:50:03.701533   47651 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0920 21:50:03.701537   47651 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0920 21:50:03.701542   47651 command_runner.go:130] > # cpuset = "0-1"
	I0920 21:50:03.701546   47651 command_runner.go:130] > # cpushares = 0
	I0920 21:50:03.701552   47651 command_runner.go:130] > # Where:
	I0920 21:50:03.701556   47651 command_runner.go:130] > # The workload name is workload-type.
	I0920 21:50:03.701565   47651 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0920 21:50:03.701572   47651 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0920 21:50:03.701577   47651 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0920 21:50:03.701586   47651 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0920 21:50:03.701594   47651 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
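Putting the commented example above together, a concrete workload definition might look like the following TOML; the workload name and resource values are hypothetical, and the table is marked EXPERIMENTAL above:

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/workload"
	annotation_prefix = "io.crio.workload-type"
	[crio.runtime.workloads.throttled.resources]
	cpushares = 512      # default CPU shares for opted-in containers
	cpuset = "0-1"       # default CPU set, Linux CPU list format

A pod opts in by carrying the io.crio/workload annotation; per-container overrides use the $annotation_prefix.$resource/$ctrName form described above.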
	I0920 21:50:03.701600   47651 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0920 21:50:03.701606   47651 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0920 21:50:03.701612   47651 command_runner.go:130] > # Default value is set to true
	I0920 21:50:03.701617   47651 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0920 21:50:03.701624   47651 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0920 21:50:03.701630   47651 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0920 21:50:03.701634   47651 command_runner.go:130] > # Default value is set to 'false'
	I0920 21:50:03.701641   47651 command_runner.go:130] > # disable_hostport_mapping = false
	I0920 21:50:03.701647   47651 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0920 21:50:03.701650   47651 command_runner.go:130] > #
	I0920 21:50:03.701655   47651 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0920 21:50:03.701661   47651 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0920 21:50:03.701666   47651 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0920 21:50:03.701672   47651 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0920 21:50:03.701681   47651 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0920 21:50:03.701684   47651 command_runner.go:130] > [crio.image]
	I0920 21:50:03.701690   47651 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0920 21:50:03.701694   47651 command_runner.go:130] > # default_transport = "docker://"
	I0920 21:50:03.701699   47651 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0920 21:50:03.701705   47651 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0920 21:50:03.701708   47651 command_runner.go:130] > # global_auth_file = ""
	I0920 21:50:03.701713   47651 command_runner.go:130] > # The image used to instantiate infra containers.
	I0920 21:50:03.701717   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.701721   47651 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I0920 21:50:03.701727   47651 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0920 21:50:03.701732   47651 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0920 21:50:03.701736   47651 command_runner.go:130] > # This option supports live configuration reload.
	I0920 21:50:03.701740   47651 command_runner.go:130] > # pause_image_auth_file = ""
	I0920 21:50:03.701745   47651 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0920 21:50:03.701751   47651 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0920 21:50:03.701756   47651 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0920 21:50:03.701761   47651 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0920 21:50:03.701765   47651 command_runner.go:130] > # pause_command = "/pause"
	I0920 21:50:03.701770   47651 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0920 21:50:03.701776   47651 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0920 21:50:03.701780   47651 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0920 21:50:03.701787   47651 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0920 21:50:03.701792   47651 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0920 21:50:03.701797   47651 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0920 21:50:03.701801   47651 command_runner.go:130] > # pinned_images = [
	I0920 21:50:03.701804   47651 command_runner.go:130] > # ]
	I0920 21:50:03.701809   47651 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0920 21:50:03.701815   47651 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0920 21:50:03.701823   47651 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0920 21:50:03.701831   47651 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0920 21:50:03.701838   47651 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0920 21:50:03.701842   47651 command_runner.go:130] > # signature_policy = ""
	I0920 21:50:03.701847   47651 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0920 21:50:03.701868   47651 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0920 21:50:03.701881   47651 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0920 21:50:03.701893   47651 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0920 21:50:03.701901   47651 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0920 21:50:03.701906   47651 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0920 21:50:03.701914   47651 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0920 21:50:03.701922   47651 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0920 21:50:03.701926   47651 command_runner.go:130] > # changing them here.
	I0920 21:50:03.701930   47651 command_runner.go:130] > # insecure_registries = [
	I0920 21:50:03.701935   47651 command_runner.go:130] > # ]
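Were TLS verification to be skipped for a private registry, the list above would be uncommented; a minimal sketch with a placeholder address (the registry host is an assumption):

	insecure_registries = [
		"registry.example.internal:5000",
	]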
	I0920 21:50:03.701941   47651 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0920 21:50:03.701948   47651 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0920 21:50:03.701952   47651 command_runner.go:130] > # image_volumes = "mkdir"
	I0920 21:50:03.701961   47651 command_runner.go:130] > # Temporary directory to use for storing big files
	I0920 21:50:03.701967   47651 command_runner.go:130] > # big_files_temporary_dir = ""
	I0920 21:50:03.701973   47651 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0920 21:50:03.701979   47651 command_runner.go:130] > # CNI plugins.
	I0920 21:50:03.701984   47651 command_runner.go:130] > [crio.network]
	I0920 21:50:03.701992   47651 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0920 21:50:03.702000   47651 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0920 21:50:03.702005   47651 command_runner.go:130] > # cni_default_network = ""
	I0920 21:50:03.702012   47651 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0920 21:50:03.702017   47651 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0920 21:50:03.702024   47651 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0920 21:50:03.702031   47651 command_runner.go:130] > # plugin_dirs = [
	I0920 21:50:03.702035   47651 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0920 21:50:03.702040   47651 command_runner.go:130] > # ]
	I0920 21:50:03.702045   47651 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0920 21:50:03.702050   47651 command_runner.go:130] > [crio.metrics]
	I0920 21:50:03.702055   47651 command_runner.go:130] > # Globally enable or disable metrics support.
	I0920 21:50:03.702061   47651 command_runner.go:130] > enable_metrics = true
	I0920 21:50:03.702066   47651 command_runner.go:130] > # Specify enabled metrics collectors.
	I0920 21:50:03.702072   47651 command_runner.go:130] > # Per default all metrics are enabled.
	I0920 21:50:03.702078   47651 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0920 21:50:03.702086   47651 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0920 21:50:03.702094   47651 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0920 21:50:03.702099   47651 command_runner.go:130] > # metrics_collectors = [
	I0920 21:50:03.702102   47651 command_runner.go:130] > # 	"operations",
	I0920 21:50:03.702107   47651 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0920 21:50:03.702113   47651 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0920 21:50:03.702117   47651 command_runner.go:130] > # 	"operations_errors",
	I0920 21:50:03.702124   47651 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0920 21:50:03.702128   47651 command_runner.go:130] > # 	"image_pulls_by_name",
	I0920 21:50:03.702134   47651 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0920 21:50:03.702141   47651 command_runner.go:130] > # 	"image_pulls_failures",
	I0920 21:50:03.702148   47651 command_runner.go:130] > # 	"image_pulls_successes",
	I0920 21:50:03.702151   47651 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0920 21:50:03.702157   47651 command_runner.go:130] > # 	"image_layer_reuse",
	I0920 21:50:03.702162   47651 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0920 21:50:03.702168   47651 command_runner.go:130] > # 	"containers_oom_total",
	I0920 21:50:03.702175   47651 command_runner.go:130] > # 	"containers_oom",
	I0920 21:50:03.702182   47651 command_runner.go:130] > # 	"processes_defunct",
	I0920 21:50:03.702186   47651 command_runner.go:130] > # 	"operations_total",
	I0920 21:50:03.702191   47651 command_runner.go:130] > # 	"operations_latency_seconds",
	I0920 21:50:03.702198   47651 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0920 21:50:03.702202   47651 command_runner.go:130] > # 	"operations_errors_total",
	I0920 21:50:03.702212   47651 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0920 21:50:03.702219   47651 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0920 21:50:03.702225   47651 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0920 21:50:03.702230   47651 command_runner.go:130] > # 	"image_pulls_success_total",
	I0920 21:50:03.702236   47651 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0920 21:50:03.702240   47651 command_runner.go:130] > # 	"containers_oom_count_total",
	I0920 21:50:03.702247   47651 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0920 21:50:03.702251   47651 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0920 21:50:03.702257   47651 command_runner.go:130] > # ]
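To restrict collection to a handful of the collectors listed above, the list would be uncommented with only the desired names; a sketch (the particular selection is illustrative):

	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]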
	I0920 21:50:03.702262   47651 command_runner.go:130] > # The port on which the metrics server will listen.
	I0920 21:50:03.702268   47651 command_runner.go:130] > # metrics_port = 9090
	I0920 21:50:03.702273   47651 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0920 21:50:03.702279   47651 command_runner.go:130] > # metrics_socket = ""
	I0920 21:50:03.702284   47651 command_runner.go:130] > # The certificate for the secure metrics server.
	I0920 21:50:03.702292   47651 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0920 21:50:03.702298   47651 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0920 21:50:03.702304   47651 command_runner.go:130] > # certificate on any modification event.
	I0920 21:50:03.702308   47651 command_runner.go:130] > # metrics_cert = ""
	I0920 21:50:03.702315   47651 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0920 21:50:03.702320   47651 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0920 21:50:03.702326   47651 command_runner.go:130] > # metrics_key = ""
	I0920 21:50:03.702331   47651 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0920 21:50:03.702337   47651 command_runner.go:130] > [crio.tracing]
	I0920 21:50:03.702343   47651 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0920 21:50:03.702349   47651 command_runner.go:130] > # enable_tracing = false
	I0920 21:50:03.702355   47651 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0920 21:50:03.702361   47651 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0920 21:50:03.702369   47651 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0920 21:50:03.702375   47651 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0920 21:50:03.702379   47651 command_runner.go:130] > # CRI-O NRI configuration.
	I0920 21:50:03.702385   47651 command_runner.go:130] > [crio.nri]
	I0920 21:50:03.702389   47651 command_runner.go:130] > # Globally enable or disable NRI.
	I0920 21:50:03.702395   47651 command_runner.go:130] > # enable_nri = false
	I0920 21:50:03.702402   47651 command_runner.go:130] > # NRI socket to listen on.
	I0920 21:50:03.702408   47651 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0920 21:50:03.702412   47651 command_runner.go:130] > # NRI plugin directory to use.
	I0920 21:50:03.702417   47651 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0920 21:50:03.702424   47651 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0920 21:50:03.702428   47651 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0920 21:50:03.702436   47651 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0920 21:50:03.702441   47651 command_runner.go:130] > # nri_disable_connections = false
	I0920 21:50:03.702448   47651 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0920 21:50:03.702453   47651 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0920 21:50:03.702459   47651 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0920 21:50:03.702464   47651 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0920 21:50:03.702471   47651 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0920 21:50:03.702475   47651 command_runner.go:130] > [crio.stats]
	I0920 21:50:03.702480   47651 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0920 21:50:03.702487   47651 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0920 21:50:03.702490   47651 command_runner.go:130] > # stats_collection_period = 0
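In practice, individual settings from the dump above are usually overridden with a small drop-in rather than by editing the main file; a minimal sketch, assuming CRI-O's default drop-in directory /etc/crio/crio.conf.d/ and an arbitrary log level:

	# /etc/crio/crio.conf.d/10-overrides.conf
	[crio.runtime]
	log_level = "debug"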
	I0920 21:50:03.702555   47651 cni.go:84] Creating CNI manager for ""
	I0920 21:50:03.702565   47651 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0920 21:50:03.702573   47651 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 21:50:03.702593   47651 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.150 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-900777 NodeName:multinode-900777 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.150"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.150 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 21:50:03.702713   47651 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.150
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-900777"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.150
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.150"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 21:50:03.702770   47651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 21:50:03.714062   47651 command_runner.go:130] > kubeadm
	I0920 21:50:03.714087   47651 command_runner.go:130] > kubectl
	I0920 21:50:03.714094   47651 command_runner.go:130] > kubelet
	I0920 21:50:03.714125   47651 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 21:50:03.714169   47651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 21:50:03.724319   47651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 21:50:03.742504   47651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 21:50:03.760213   47651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0920 21:50:03.779056   47651 ssh_runner.go:195] Run: grep 192.168.39.150	control-plane.minikube.internal$ /etc/hosts
	I0920 21:50:03.783055   47651 command_runner.go:130] > 192.168.39.150	control-plane.minikube.internal
	I0920 21:50:03.783122   47651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 21:50:03.926637   47651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 21:50:03.942713   47651 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777 for IP: 192.168.39.150
	I0920 21:50:03.942740   47651 certs.go:194] generating shared ca certs ...
	I0920 21:50:03.942760   47651 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 21:50:03.943088   47651 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 21:50:03.943169   47651 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 21:50:03.943186   47651 certs.go:256] generating profile certs ...
	I0920 21:50:03.943301   47651 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/client.key
	I0920 21:50:03.943376   47651 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key.b3bcb748
	I0920 21:50:03.943428   47651 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key
	I0920 21:50:03.943441   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0920 21:50:03.943468   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0920 21:50:03.943487   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0920 21:50:03.943503   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0920 21:50:03.943521   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0920 21:50:03.943540   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0920 21:50:03.943556   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0920 21:50:03.943572   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0920 21:50:03.943638   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 21:50:03.943678   47651 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 21:50:03.943693   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 21:50:03.943730   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 21:50:03.943762   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 21:50:03.943799   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 21:50:03.943853   47651 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 21:50:03.943897   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> /usr/share/ca-certificates/167852.pem
	I0920 21:50:03.943918   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:03.943936   47651 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem -> /usr/share/ca-certificates/16785.pem
	I0920 21:50:03.944532   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 21:50:03.972117   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 21:50:03.998011   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 21:50:04.023056   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 21:50:04.048420   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 21:50:04.074099   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 21:50:04.100445   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 21:50:04.126047   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/multinode-900777/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 21:50:04.151031   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 21:50:04.177338   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 21:50:04.203759   47651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 21:50:04.229779   47651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 21:50:04.248051   47651 ssh_runner.go:195] Run: openssl version
	I0920 21:50:04.254129   47651 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0920 21:50:04.254313   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 21:50:04.265552   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270197   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270348   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.270401   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 21:50:04.276947   47651 command_runner.go:130] > 51391683
	I0920 21:50:04.277112   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 21:50:04.286664   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 21:50:04.297835   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302605   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302645   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.302683   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 21:50:04.308990   47651 command_runner.go:130] > 3ec20f2e
	I0920 21:50:04.309188   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 21:50:04.319386   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 21:50:04.330668   47651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335269   47651 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335304   47651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.335474   47651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 21:50:04.341501   47651 command_runner.go:130] > b5213941
	I0920 21:50:04.341644   47651 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
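The three `openssl x509 -hash -noout` / `ln -fs` pairs above install each CA into OpenSSL's hashed-symlink directory (`/etc/ssl/certs/<subject-hash>.0`) so clients inside the VM trust it. A minimal Go sketch of that same pattern is below; the helper name and the example path are illustrative, not minikube's own code, and the commands need root to actually write under /etc/ssl/certs.

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCASymlink mirrors the openssl/ln pattern from the log: compute the
// subject hash of a PEM certificate and link /etc/ssl/certs/<hash>.0 to it so
// OpenSSL-based clients pick it up. Illustrative sketch only.
func installCASymlink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// -fs: force-replace any stale link, as in the log above.
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCASymlink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}
```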
	I0920 21:50:04.351275   47651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:50:04.356095   47651 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 21:50:04.356116   47651 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0920 21:50:04.356123   47651 command_runner.go:130] > Device: 253,1	Inode: 3148840     Links: 1
	I0920 21:50:04.356133   47651 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0920 21:50:04.356146   47651 command_runner.go:130] > Access: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356155   47651 command_runner.go:130] > Modify: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356163   47651 command_runner.go:130] > Change: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356168   47651 command_runner.go:130] >  Birth: 2024-09-20 21:43:13.373431049 +0000
	I0920 21:50:04.356238   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 21:50:04.364433   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.364490   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 21:50:04.385950   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.386157   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 21:50:04.392216   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.392309   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 21:50:04.398206   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.398275   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 21:50:04.404128   47651 command_runner.go:130] > Certificate will not expire
	I0920 21:50:04.404206   47651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 21:50:04.409828   47651 command_runner.go:130] > Certificate will not expire
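Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); an exit status of 0 means it stays valid past that window, which the log surfaces as "Certificate will not expire". A small hedged Go sketch of the same check, with an illustrative path:

```go
package main

import (
	"fmt"
	"os/exec"
)

// willExpireWithin reports whether the certificate at path expires within the
// given number of seconds, using the same `openssl x509 -checkend` call the
// log shows. openssl exits 0 when the cert remains valid past the window.
func willExpireWithin(path string, seconds int) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", fmt.Sprint(seconds))
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate will expire within the window
		}
		return false, err
	}
	return false, nil
}

func main() {
	// Path is illustrative; any PEM certificate works.
	expiring, err := willExpireWithin("/var/lib/minikube/certs/etcd/server.crt", 86400)
	fmt.Println(expiring, err)
}
```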
	I0920 21:50:04.409999   47651 kubeadm.go:392] StartCluster: {Name:multinode-900777 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-900777 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.150 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.27 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.68 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:fa
lse istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:50:04.410129   47651 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 21:50:04.410182   47651 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 21:50:04.454898   47651 command_runner.go:130] > 6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700
	I0920 21:50:04.454929   47651 command_runner.go:130] > 18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218
	I0920 21:50:04.454937   47651 command_runner.go:130] > 582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69
	I0920 21:50:04.454947   47651 command_runner.go:130] > b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71
	I0920 21:50:04.454955   47651 command_runner.go:130] > c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf
	I0920 21:50:04.454977   47651 command_runner.go:130] > 50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2
	I0920 21:50:04.454985   47651 command_runner.go:130] > a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c
	I0920 21:50:04.455000   47651 command_runner.go:130] > e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded
	I0920 21:50:04.455020   47651 cri.go:89] found id: "6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700"
	I0920 21:50:04.455031   47651 cri.go:89] found id: "18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218"
	I0920 21:50:04.455037   47651 cri.go:89] found id: "582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69"
	I0920 21:50:04.455041   47651 cri.go:89] found id: "b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71"
	I0920 21:50:04.455046   47651 cri.go:89] found id: "c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf"
	I0920 21:50:04.455053   47651 cri.go:89] found id: "50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2"
	I0920 21:50:04.455057   47651 cri.go:89] found id: "a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c"
	I0920 21:50:04.455061   47651 cri.go:89] found id: "e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded"
	I0920 21:50:04.455065   47651 cri.go:89] found id: ""
	I0920 21:50:04.455113   47651 ssh_runner.go:195] Run: sudo runc list -f json
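Before restarting the control plane, the log enumerates the existing kube-system containers (running or exited) through `crictl ps -a --quiet --label ...` and records each ID it finds. As a rough sketch of issuing that same query from Go by shelling out to crictl, assuming crictl is on PATH and a CRI-O socket is configured; this is not minikube's own helper:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers whose pod
// namespace label is kube-system, matching the crictl invocation in the log.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(len(ids), "kube-system containers found", err)
}
```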
	
	
	==> CRI-O <==
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.178214592Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256178191946,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=953edda1-e095-4192-af70-3a62af16355c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.184914073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16b61aa5-f74d-46c3-8dfd-465f418b55fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.185001830Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16b61aa5-f74d-46c3-8dfd-465f418b55fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.185408009Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16b61aa5-f74d-46c3-8dfd-465f418b55fe name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.227746413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=262ff9b9-8d00-4bbe-b237-2458125ab9c5 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.227824799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=262ff9b9-8d00-4bbe-b237-2458125ab9c5 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.228976035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6a68a6b3-c713-4b95-9e93-ad53d176fc4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.229556997Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256229528971,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6a68a6b3-c713-4b95-9e93-ad53d176fc4f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.230001088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e995a3a5-be29-4269-aa77-6c025757171e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.230125197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e995a3a5-be29-4269-aa77-6c025757171e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.230457554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e995a3a5-be29-4269-aa77-6c025757171e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.273931435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4621895c-857a-4a93-8fd0-acd8a9cc44b3 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.274090691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4621895c-857a-4a93-8fd0-acd8a9cc44b3 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.275677781Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7b8fd9a9-534d-4d80-a9d9-d9211f4174d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.276152943Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256276127746,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b8fd9a9-534d-4d80-a9d9-d9211f4174d1 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.276786823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6f66a5f-87ea-439d-ac82-7e0b2fd8237d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.276862145Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6f66a5f-87ea-439d-ac82-7e0b2fd8237d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.277263332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6f66a5f-87ea-439d-ac82-7e0b2fd8237d name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.318919074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=67f3c5c1-56ee-4e16-bbf1-7fe36e3cbed6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.319015996Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=67f3c5c1-56ee-4e16-bbf1-7fe36e3cbed6 name=/runtime.v1.RuntimeService/Version
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.320441545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4840bf14-566e-4d99-86dc-0289350d933f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.320872886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256320850636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4840bf14-566e-4d99-86dc-0289350d933f name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.321346162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3124a12e-2ec7-4e41-a099-246f09897c69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.321426191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3124a12e-2ec7-4e41-a099-246f09897c69 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 21:54:16 multinode-900777 crio[2735]: time="2024-09-20 21:54:16.321773843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:b67c7cba8719a32049e4aee34c28ca1fa38e057bb6af55ebcf1aaaf3d6dcdcc3,PodSandboxId:002a17acda15d99efc7f18487e1a07b00191315c2a6058721ba2e4b467062ca6,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1726869044227005857,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd,PodSandboxId:1e1f03dc85c5ea8b5feeffdfe16fb3f5e46382781396323056c3801189a23388,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_RUNNING,CreatedAt:1726869010734485903,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f,PodSandboxId:373bd526668d676630150383b1136e622e26cdb9a836e1c549f9e1f4b3b99110,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726869010668734895,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\
":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577,PodSandboxId:c764e663a1bb1fc0b3c6dc48bdba681e1a1e83a59d2148e852bcbae344b6255c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726869010516122385,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89-25d48c6a3159,},Annotations:map[string]
string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0e9bd6143b497e036ef039ae65ce4d9eefed2027705383694b823b491e342bb,PodSandboxId:b4bfdb16e2b388b3c716616a44265ba1f22e34c8709e000e25f5d73b37f93e76,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726869010540995575,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.ku
bernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2,PodSandboxId:4608b5d9b987bfb6d083b69707304259fd8cc4b6cb57fa3cb25f28ed5f667bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726869006718968551,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c,PodSandboxId:6086caea199f724c557dd3cd5a7d0b6cea3b1a10a89783414e0e5e668cd6e280,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726869006724273726,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io.kub
ernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795,PodSandboxId:95dfcfa5b87014e54ec2e773f9b92ebb439cdd63f726d03b7bb8adc7bbdedb9b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726869006655933247,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d,PodSandboxId:daa96fb0b4999e22cc068871b578afd1d6c594d2a50b910e90f628edeec11292,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726869006570861754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17bb805c9f26bae0c8d903fe2e8348e7ab09b471a2e1f66ed1272a1db9609454,PodSandboxId:f36cbf240c003ce08be7b934fa971263d6f9b704ba12cd4f2eaeff402d0ef932,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1726868676463834955,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-ggs4l,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fcfc4c27-f847-437f-a632-471c6642c3d8,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.res
tartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700,PodSandboxId:20fe1c78f16126f484de74a9aa4a7def8fbcca40cf317d91456ef8d4ee2e1d59,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726868620713738655,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jnskb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f6640ed1-0b3c-49af-affb-29bf066046f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contain
erPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:18292c905057d024e4b4f79c3d9debea344f517f9ee8862727573806bbf2e218,PodSandboxId:e67606f45d94f0c6b4a217acc1be29a70acd15cae8e96b6be564f4115288cfcb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726868620664200244,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 14baf822-f494-485b-b1fe-afeecdbfb9cc,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71,PodSandboxId:0a31e2d64ec9c6cd53cfc7211ac1f08c1cda55df95229288dab5fde5ab05169f,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f,State:CONTAINER_EXITED,CreatedAt:1726868608497884196,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-kqbn9,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 6a2c145b-1e65-4f8b-823c-b90bd655660d,},Annotations:map[string]string{io.kubernetes.container.hash: e80daca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69,PodSandboxId:98f6fdf0aa6c6b653a9c8a72694a042f7e54ad0afabde59d67352f860e9c6fcf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726868608615212878,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b7xqf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2fa34b2-4331-43bf-8e89
-25d48c6a3159,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf,PodSandboxId:7e8ad24f26573237a25af82716e76ac2e2dbd79764d301e982764a3fcfb9fed8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726868597584457509,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 185f9d7882eb079a94598b9c41884398,}
,Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2,PodSandboxId:38ec06816887a48b64a10f9f41b01ba7b94a495bcee0e34dbf41b2b652328935,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726868597579548680,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea9cacdcb60603cdf5d6018ac79a2bdf,},Annotations:map[string]string{io.kubernetes.
container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c,PodSandboxId:6f66940a17df926a30d26a32d47576cbc4fd3b40ba7c89e1cc47bda9cd297c20,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726868597529634055,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d22fcaafb639b98addccccb39bf92a8e,},Annotations:map[string]string{io
.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded,PodSandboxId:3300e73a467f1209408708c9808f8f80889ea04afae94a100dcdbe961818a648,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726868597487676157,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-900777,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a84f9db493d58f219895099656aff982,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3124a12e-2ec7-4e41-a099-246f09897c69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b67c7cba8719a       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   002a17acda15d       busybox-7dff88458-ggs4l
	ce2eaf799ef1e       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      4 minutes ago       Running             kindnet-cni               1                   1e1f03dc85c5e       kindnet-kqbn9
	5c0d06b49a314       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      4 minutes ago       Running             coredns                   1                   373bd526668d6       coredns-7c65d6cfc9-jnskb
	c0e9bd6143b49       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   b4bfdb16e2b38       storage-provisioner
	b969bded3ff4a       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      4 minutes ago       Running             kube-proxy                1                   c764e663a1bb1       kube-proxy-b7xqf
	a3d28feb205eb       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      4 minutes ago       Running             kube-controller-manager   1                   6086caea199f7       kube-controller-manager-multinode-900777
	23a5eed7a8f74       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      4 minutes ago       Running             kube-scheduler            1                   4608b5d9b987b       kube-scheduler-multinode-900777
	a63051e73e23c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      4 minutes ago       Running             kube-apiserver            1                   95dfcfa5b8701       kube-apiserver-multinode-900777
	cd4e566041f5a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      4 minutes ago       Running             etcd                      1                   daa96fb0b4999       etcd-multinode-900777
	17bb805c9f26b       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   f36cbf240c003       busybox-7dff88458-ggs4l
	6f311f2a53987       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      10 minutes ago      Exited              coredns                   0                   20fe1c78f1612       coredns-7c65d6cfc9-jnskb
	18292c905057d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   e67606f45d94f       storage-provisioner
	582dd3e8149a0       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      10 minutes ago      Exited              kube-proxy                0                   98f6fdf0aa6c6       kube-proxy-b7xqf
	b6d798e155657       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                      10 minutes ago      Exited              kindnet-cni               0                   0a31e2d64ec9c       kindnet-kqbn9
	c67f551b7827a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      10 minutes ago      Exited              kube-scheduler            0                   7e8ad24f26573       kube-scheduler-multinode-900777
	50530a2c31bb0       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      10 minutes ago      Exited              etcd                      0                   38ec06816887a       etcd-multinode-900777
	a0931aca8b32f       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      10 minutes ago      Exited              kube-controller-manager   0                   6f66940a17df9       kube-controller-manager-multinode-900777
	e7fd49d874245       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      10 minutes ago      Exited              kube-apiserver            0                   3300e73a467f1       kube-apiserver-multinode-900777
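For reference, the two-generation listing above (attempt-0 containers Exited, attempt-1 containers Running after the node restart) can also be pulled straight from CRI-O on the node. A minimal sketch, assuming the multinode-900777 profile is still running:

    # list all containers known to CRI-O, including exited ones
    minikube -p multinode-900777 ssh -- sudo crictl ps -a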
	
	
	==> coredns [5c0d06b49a314365610d58586a4fea3c88b6ac213fe03af9f77dadcf2f3fcb0f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:49289 - 29305 "HINFO IN 5010109645727926349.4505930523966218435. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009798702s
	
	
	==> coredns [6f311f2a53987ca6d3fde3a78f269a1c79d7310a6095371b35b6227cd409c700] <==
	[INFO] 10.244.0.3:54063 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001915783s
	[INFO] 10.244.0.3:41011 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000091487s
	[INFO] 10.244.0.3:43261 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000058762s
	[INFO] 10.244.0.3:47557 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00141351s
	[INFO] 10.244.0.3:43181 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000062993s
	[INFO] 10.244.0.3:53004 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000079081s
	[INFO] 10.244.0.3:38072 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052934s
	[INFO] 10.244.1.2:37783 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153201s
	[INFO] 10.244.1.2:42084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00016385s
	[INFO] 10.244.1.2:55726 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00010629s
	[INFO] 10.244.1.2:45953 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000122683s
	[INFO] 10.244.0.3:49442 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131355s
	[INFO] 10.244.0.3:50398 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125338s
	[INFO] 10.244.0.3:52137 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013082s
	[INFO] 10.244.0.3:48203 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008742s
	[INFO] 10.244.1.2:44751 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013303s
	[INFO] 10.244.1.2:51394 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000237966s
	[INFO] 10.244.1.2:58870 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141949s
	[INFO] 10.244.1.2:56924 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000133619s
	[INFO] 10.244.0.3:46117 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000141663s
	[INFO] 10.244.0.3:34248 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112414s
	[INFO] 10.244.0.3:35371 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000081051s
	[INFO] 10.244.0.3:47773 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083032s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
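The lookups for kubernetes.default and host.minikube.internal recorded above originate from the busybox test pods. A minimal sketch of issuing the same query by hand, assuming the kubectl context carries the profile name and the busybox-7dff88458-ggs4l pod is still scheduled:

    # resolve the in-cluster API service through CoreDNS, as the test pod does
    kubectl --context multinode-900777 exec busybox-7dff88458-ggs4l -- nslookup kubernetes.default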
	
	
	==> describe nodes <==
	Name:               multinode-900777
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-900777
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=multinode-900777
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T21_43_24_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:43:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-900777
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:54:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 21:50:09 +0000   Fri, 20 Sep 2024 21:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.150
	  Hostname:    multinode-900777
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5e5c0de8ced04603878beaf61e2a1f5c
	  System UUID:                5e5c0de8-ced0-4603-878b-eaf61e2a1f5c
	  Boot ID:                    9bc3e81b-7635-43d7-95ad-dbda41fde27e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ggs4l                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 coredns-7c65d6cfc9-jnskb                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-900777                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-kqbn9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-900777             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-900777    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-b7xqf                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-900777             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 4m5s                   kube-proxy       
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x8 over 11m)      kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 11m)      kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 11m)      kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-900777 event: Registered Node multinode-900777 in Controller
	  Normal  NodeReady                10m                    kubelet          Node multinode-900777 status is now: NodeReady
	  Normal  Starting                 4m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m10s (x8 over 4m10s)  kubelet          Node multinode-900777 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x8 over 4m10s)  kubelet          Node multinode-900777 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x7 over 4m10s)  kubelet          Node multinode-900777 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                   node-controller  Node multinode-900777 event: Registered Node multinode-900777 in Controller
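A compact cross-check of the per-node conditions captured in this section; a minimal sketch, assuming the kubectl context is named after the profile:

    # overview of all nodes in the cluster
    kubectl --context multinode-900777 get nodes -o wide
    # Ready condition of a single node, e.g. the control plane
    kubectl --context multinode-900777 get node multinode-900777 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'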
	
	
	Name:               multinode-900777-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-900777-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=multinode-900777
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_20T21_50_51_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 21:50:50 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-900777-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 21:51:52 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:52:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:52:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:52:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 20 Sep 2024 21:51:21 +0000   Fri, 20 Sep 2024 21:52:33 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.27
	  Hostname:    multinode-900777-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 aac29acd9c27493a8fe0dfbc425b686c
	  System UUID:                aac29acd-9c27-493a-8fe0-dfbc425b686c
	  Boot ID:                    35a12f4b-ee8e-4b62-9b15-87db6c8e7c97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-8ggmr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 kindnet-7b5r2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-rghsp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  Starting                 9m58s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet          Node multinode-900777-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet          Node multinode-900777-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet          Node multinode-900777-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m45s                  kubelet          Node multinode-900777-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m26s (x2 over 3m26s)  kubelet          Node multinode-900777-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m26s (x2 over 3m26s)  kubelet          Node multinode-900777-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m26s (x2 over 3m26s)  kubelet          Node multinode-900777-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m8s                   kubelet          Node multinode-900777-m02 status is now: NodeReady
	  Normal  NodeNotReady             103s                   node-controller  Node multinode-900777-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.057422] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.183705] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.127886] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.288084] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.005940] systemd-fstab-generator[751]: Ignoring "noauto" option for root device
	[  +5.012101] systemd-fstab-generator[893]: Ignoring "noauto" option for root device
	[  +0.061195] kauditd_printk_skb: 158 callbacks suppressed
	[  +6.501434] systemd-fstab-generator[1228]: Ignoring "noauto" option for root device
	[  +0.081672] kauditd_printk_skb: 69 callbacks suppressed
	[  +4.631084] systemd-fstab-generator[1329]: Ignoring "noauto" option for root device
	[  +0.816500] kauditd_printk_skb: 43 callbacks suppressed
	[ +12.266700] kauditd_printk_skb: 38 callbacks suppressed
	[Sep20 21:44] kauditd_printk_skb: 14 callbacks suppressed
	[Sep20 21:49] systemd-fstab-generator[2660]: Ignoring "noauto" option for root device
	[  +0.151088] systemd-fstab-generator[2672]: Ignoring "noauto" option for root device
	[  +0.177757] systemd-fstab-generator[2686]: Ignoring "noauto" option for root device
	[  +0.149739] systemd-fstab-generator[2699]: Ignoring "noauto" option for root device
	[  +0.317927] systemd-fstab-generator[2727]: Ignoring "noauto" option for root device
	[Sep20 21:50] systemd-fstab-generator[2819]: Ignoring "noauto" option for root device
	[  +0.086622] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.810771] systemd-fstab-generator[2941]: Ignoring "noauto" option for root device
	[  +4.710755] kauditd_printk_skb: 74 callbacks suppressed
	[ +14.833751] systemd-fstab-generator[3781]: Ignoring "noauto" option for root device
	[  +0.102721] kauditd_printk_skb: 34 callbacks suppressed
	[ +18.803094] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [50530a2c31bb0d7553e28af3cdfab8ed89fa1a20671236c7eaf9e1e0a23d5df2] <==
	{"level":"info","ts":"2024-09-20T21:43:18.031301Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.033379Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:multinode-900777 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T21:43:18.035089Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:43:18.035483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:43:18.036189Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036307Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:43:18.036924Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:43:18.037719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.150:2379"}
	{"level":"info","ts":"2024-09-20T21:43:18.043508Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:43:18.044753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T21:43:18.043595Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T21:43:18.049813Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T21:45:08.064209Z","caller":"traceutil/trace.go:171","msg":"trace[556783716] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"116.534036ms","start":"2024-09-20T21:45:07.947615Z","end":"2024-09-20T21:45:08.064149Z","steps":["trace[556783716] 'process raft request'  (duration: 101.269657ms)","trace[556783716] 'compare'  (duration: 14.908228ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T21:46:02.323411Z","caller":"traceutil/trace.go:171","msg":"trace[2016325494] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"123.459253ms","start":"2024-09-20T21:46:02.199926Z","end":"2024-09-20T21:46:02.323385Z","steps":["trace[2016325494] 'process raft request'  (duration: 122.349125ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T21:48:21.940771Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-20T21:48:21.940903Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"multinode-900777","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	{"level":"warn","ts":"2024-09-20T21:48:21.942167Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:21.942333Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:22.024685Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-20T21:48:22.024857Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.150:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-20T21:48:22.024957Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"2236e2deb63504cb","current-leader-member-id":"2236e2deb63504cb"}
	{"level":"info","ts":"2024-09-20T21:48:22.031386Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:48:22.031587Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:48:22.031615Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"multinode-900777","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"]}
	
	
	==> etcd [cd4e566041f5a64eb6bce4e847954237a08871eaea900cf5beceb05f1652441d] <==
	{"level":"info","ts":"2024-09-20T21:50:06.977412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb switched to configuration voters=(2465407292199470283)"}
	{"level":"info","ts":"2024-09-20T21:50:06.977633Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","added-peer-id":"2236e2deb63504cb","added-peer-peer-urls":["https://192.168.39.150:2380"]}
	{"level":"info","ts":"2024-09-20T21:50:06.977973Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d5d2d7cf60dc9e96","local-member-id":"2236e2deb63504cb","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:50:06.978090Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T21:50:07.011818Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T21:50:07.013660Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"2236e2deb63504cb","initial-advertise-peer-urls":["https://192.168.39.150:2380"],"listen-peer-urls":["https://192.168.39.150:2380"],"advertise-client-urls":["https://192.168.39.150:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.150:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T21:50:07.013705Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T21:50:07.013334Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:50:07.013736Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.150:2380"}
	{"level":"info","ts":"2024-09-20T21:50:08.225118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225249Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgPreVoteResp from 2236e2deb63504cb at term 2"}
	{"level":"info","ts":"2024-09-20T21:50:08.225351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb received MsgVoteResp from 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2236e2deb63504cb became leader at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.225432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 2236e2deb63504cb elected leader 2236e2deb63504cb at term 3"}
	{"level":"info","ts":"2024-09-20T21:50:08.232737Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:50:08.233864Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:50:08.234701Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T21:50:08.232686Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"2236e2deb63504cb","local-member-attributes":"{Name:multinode-900777 ClientURLs:[https://192.168.39.150:2379]}","request-path":"/0/members/2236e2deb63504cb/attributes","cluster-id":"d5d2d7cf60dc9e96","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T21:50:08.235566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T21:50:08.235818Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T21:50:08.235853Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T21:50:08.236604Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T21:50:08.237391Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.150:2379"}
	
	
	==> kernel <==
	 21:54:16 up 11 min,  0 users,  load average: 0.11, 0.12, 0.09
	Linux multinode-900777 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b6d798e1556576e5dffdf23edd5f4b4a03388f16f4a42c334d1f792bf68a7a71] <==
	I0920 21:47:39.733970       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:47:49.733797       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:47:49.733969       1 main.go:299] handling current node
	I0920 21:47:49.734003       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:47:49.734068       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:47:49.734229       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:47:49.734255       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:47:59.731367       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:47:59.731467       1 main.go:299] handling current node
	I0920 21:47:59.731494       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:47:59.731500       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:47:59.731633       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:47:59.731658       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:48:09.729138       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:48:09.729274       1 main.go:299] handling current node
	I0920 21:48:09.729314       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:48:09.729321       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:48:09.729494       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:48:09.729518       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	I0920 21:48:19.727605       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:48:19.727725       1 main.go:299] handling current node
	I0920 21:48:19.727772       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:48:19.727791       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:48:19.727928       1 main.go:295] Handling node with IPs: map[192.168.39.68:{}]
	I0920 21:48:19.727949       1 main.go:322] Node multinode-900777-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [ce2eaf799ef1e114d03d8fe9185abf3d1452b91982ffd3a4cb07b23493f8d4dd] <==
	I0920 21:53:11.739689       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:53:21.742554       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:53:21.742663       1 main.go:299] handling current node
	I0920 21:53:21.742691       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:53:21.742709       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:53:31.740139       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:53:31.740238       1 main.go:299] handling current node
	I0920 21:53:31.740266       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:53:31.740283       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:53:41.741146       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:53:41.741195       1 main.go:299] handling current node
	I0920 21:53:41.741210       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:53:41.741215       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:53:51.742820       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:53:51.742886       1 main.go:299] handling current node
	I0920 21:53:51.742908       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:53:51.742914       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:54:01.749094       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:54:01.749243       1 main.go:299] handling current node
	I0920 21:54:01.749279       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:54:01.749298       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	I0920 21:54:11.739508       1 main.go:295] Handling node with IPs: map[192.168.39.150:{}]
	I0920 21:54:11.739600       1 main.go:299] handling current node
	I0920 21:54:11.739644       1 main.go:295] Handling node with IPs: map[192.168.39.27:{}]
	I0920 21:54:11.739649       1 main.go:322] Node multinode-900777-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [a63051e73e23ce7f5895e91a476b9224e3f5ced60b393fd21ae626596c804795] <==
	I0920 21:50:09.544632       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 21:50:09.550589       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 21:50:09.557990       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 21:50:09.558135       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 21:50:09.558527       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 21:50:09.558631       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 21:50:09.558703       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 21:50:09.558739       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 21:50:09.561483       1 aggregator.go:171] initial CRD sync complete...
	I0920 21:50:09.561535       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 21:50:09.561559       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 21:50:09.561582       1 cache.go:39] Caches are synced for autoregister controller
	I0920 21:50:09.576711       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0920 21:50:09.586012       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 21:50:09.606963       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0920 21:50:09.607136       1 policy_source.go:224] refreshing policies
	I0920 21:50:09.619249       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 21:50:10.456236       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 21:50:11.971962       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 21:50:12.096161       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 21:50:12.110202       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 21:50:12.202890       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 21:50:12.210445       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 21:50:13.046713       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 21:50:13.146014       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e7fd49d874245d0078740b98a6649b67da98bb4e0b2d2e506880e68b70a27ded] <==
	W0920 21:48:21.966642       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966692       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966725       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966777       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966831       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966887       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966925       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.966961       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967009       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967483       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967538       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967584       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967630       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.967679       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.969207       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.969450       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970213       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970389       1 logging.go:55] [core] [Channel #172 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970457       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970505       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970558       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970615       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970679       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970734       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 21:48:21.970791       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [a0931aca8b32fec5fcc9efced0670a87819fd0fc3a3d653c8bf6361eb7a90a6c] <==
	I0920 21:45:56.783819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:56.785229       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:45:57.965371       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-900777-m03\" does not exist"
	I0920 21:45:57.966458       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:45:57.978491       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-900777-m03" podCIDRs=["10.244.4.0/24"]
	I0920 21:45:57.978637       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:57.978843       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:57.987155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:58.334473       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:45:58.690545       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:02.325485       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:08.123683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:16.063007       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:46:16.063191       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:16.082137       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:17.201175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:46:57.219331       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:46:57.219436       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m03"
	I0920 21:46:57.236631       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:46:57.281190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="14.093221ms"
	I0920 21:46:57.281278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="36.403µs"
	I0920 21:47:02.278168       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:47:02.295551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:47:02.341536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:47:12.425995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	
	
	==> kube-controller-manager [a3d28feb205eb0202d9564b22496d32dcbd98dcd81fd0e9b25550f02a1b4453c] <==
	I0920 21:51:28.192908       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-900777-m03" podCIDRs=["10.244.2.0/24"]
	I0920 21:51:28.193306       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.193390       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.202903       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.582112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:28.931858       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:33.113890       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:38.276446       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:46.297350       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:51:46.297592       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:46.310503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:48.069787       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:51.158772       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:51.181950       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:51.757174       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m03"
	I0920 21:51:51.757247       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-900777-m02"
	I0920 21:52:33.010737       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-665pn"
	I0920 21:52:33.045893       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-665pn"
	I0920 21:52:33.046082       1 gc_controller.go:342] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-xbgx7"
	I0920 21:52:33.075079       1 gc_controller.go:258] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-xbgx7"
	I0920 21:52:33.087689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:52:33.121184       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="20.950174ms"
	I0920 21:52:33.122330       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="31.785µs"
	I0920 21:52:33.122469       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	I0920 21:52:38.179415       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-900777-m02"
	
	
	==> kube-proxy [582dd3e8149a0843a693c5cb0e42f6b58426277bf245dbe17b212cd85e5e1c69] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:43:28.843494       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:43:28.912912       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0920 21:43:28.913175       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:43:28.950442       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:43:28.950547       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:43:28.950592       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:43:28.953189       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:43:28.953613       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:43:28.953856       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:43:28.956425       1 config.go:199] "Starting service config controller"
	I0920 21:43:28.956521       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:43:28.956600       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:43:28.956644       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:43:28.957529       1 config.go:328] "Starting node config controller"
	I0920 21:43:28.959149       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:43:29.057242       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:43:29.057413       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:43:29.059257       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b969bded3ff4aef2da55e60e9ced5e18b8d96d0d048f4e00f573d49cca628577] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 21:50:10.994121       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 21:50:11.007407       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.150"]
	E0920 21:50:11.007518       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 21:50:11.064058       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 21:50:11.064127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 21:50:11.064154       1 server_linux.go:169] "Using iptables Proxier"
	I0920 21:50:11.068962       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 21:50:11.069291       1 server.go:483] "Version info" version="v1.31.1"
	I0920 21:50:11.069321       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:50:11.072356       1 config.go:199] "Starting service config controller"
	I0920 21:50:11.072441       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 21:50:11.072461       1 config.go:105] "Starting endpoint slice config controller"
	I0920 21:50:11.072465       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 21:50:11.073565       1 config.go:328] "Starting node config controller"
	I0920 21:50:11.073646       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 21:50:11.172724       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 21:50:11.172788       1 shared_informer.go:320] Caches are synced for service config
	I0920 21:50:11.174309       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [23a5eed7a8f74103c2fbc42a3fb9888b4b9de3c71f2b55703c39865edd7121d2] <==
	I0920 21:50:07.628730       1 serving.go:386] Generated self-signed cert in-memory
	W0920 21:50:09.503291       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 21:50:09.503391       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0920 21:50:09.503426       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 21:50:09.503456       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 21:50:09.533462       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 21:50:09.533584       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 21:50:09.537639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 21:50:09.537733       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 21:50:09.537766       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 21:50:09.537921       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 21:50:09.638351       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c67f551b7827a56f7eae4efa51729e08ee7e5088100e2a3f7033b63a2572f5cf] <==
	E0920 21:43:21.211148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.302399       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 21:43:21.302466       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.320135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 21:43:21.320188       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.351467       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 21:43:21.353198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.374401       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 21:43:21.374510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.418731       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 21:43:21.418788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.473809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 21:43:21.473937       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.522691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 21:43:21.522725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.542679       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 21:43:21.542816       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.554897       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 21:43:21.554996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 21:43:21.833319       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 21:43:21.833518       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 21:43:24.012574       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 21:48:21.936369       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0920 21:48:21.936517       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0920 21:48:21.936688       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 20 21:53:05 multinode-900777 kubelet[2948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:53:05 multinode-900777 kubelet[2948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:53:05 multinode-900777 kubelet[2948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:53:05 multinode-900777 kubelet[2948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:53:06 multinode-900777 kubelet[2948]: E0920 21:53:06.070861    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869186070194231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:06 multinode-900777 kubelet[2948]: E0920 21:53:06.070912    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869186070194231,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:16 multinode-900777 kubelet[2948]: E0920 21:53:16.072302    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869196071967751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:16 multinode-900777 kubelet[2948]: E0920 21:53:16.072334    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869196071967751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:26 multinode-900777 kubelet[2948]: E0920 21:53:26.075146    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869206074563323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:26 multinode-900777 kubelet[2948]: E0920 21:53:26.075499    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869206074563323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:36 multinode-900777 kubelet[2948]: E0920 21:53:36.077118    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869216076773285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:36 multinode-900777 kubelet[2948]: E0920 21:53:36.077207    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869216076773285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:46 multinode-900777 kubelet[2948]: E0920 21:53:46.081644    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869226079186695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:46 multinode-900777 kubelet[2948]: E0920 21:53:46.081688    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869226079186695,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:56 multinode-900777 kubelet[2948]: E0920 21:53:56.084845    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869236084466989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:53:56 multinode-900777 kubelet[2948]: E0920 21:53:56.084887    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869236084466989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:54:05 multinode-900777 kubelet[2948]: E0920 21:54:05.998102    2948 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 21:54:05 multinode-900777 kubelet[2948]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 21:54:05 multinode-900777 kubelet[2948]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 21:54:05 multinode-900777 kubelet[2948]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 21:54:05 multinode-900777 kubelet[2948]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 21:54:06 multinode-900777 kubelet[2948]: E0920 21:54:06.087636    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869246086872093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:54:06 multinode-900777 kubelet[2948]: E0920 21:54:06.087736    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869246086872093,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:54:16 multinode-900777 kubelet[2948]: E0920 21:54:16.090257    2948 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256089817957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 21:54:16 multinode-900777 kubelet[2948]: E0920 21:54:16.090310    2948 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726869256089817957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134599,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 21:54:15.900828   49628 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19672-9614/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-900777 -n multinode-900777
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-900777 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (144.86s)

                                                
                                    
x
+
TestPreload (270.75s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-009846 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0920 21:59:11.808941   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:00:09.131852   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-009846 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m8.429869107s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-009846 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-009846 image pull gcr.io/k8s-minikube/busybox: (2.277125339s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-009846
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-009846: exit status 82 (2m0.466889158s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-009846"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-009846 failed: exit status 82
panic.go:629: *** TestPreload FAILED at 2024-09-20 22:02:16.306304822 +0000 UTC m=+4476.437439796
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-009846 -n test-preload-009846
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-009846 -n test-preload-009846: exit status 3 (18.676805596s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:02:34.979380   52489 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	E0920 22:02:34.979404   52489 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-009846" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-009846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-009846
--- FAIL: TestPreload (270.75s)

                                                
                                    
x
+
TestKubernetesUpgrade (411.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m2.122952622s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-696728] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-696728" primary control-plane node in "kubernetes-upgrade-696728" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:04:29.552886   53566 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:04:29.552999   53566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:04:29.553003   53566 out.go:358] Setting ErrFile to fd 2...
	I0920 22:04:29.553009   53566 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:04:29.553273   53566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:04:29.553924   53566 out.go:352] Setting JSON to false
	I0920 22:04:29.555049   53566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6418,"bootTime":1726863452,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:04:29.555174   53566 start.go:139] virtualization: kvm guest
	I0920 22:04:29.557185   53566 out.go:177] * [kubernetes-upgrade-696728] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:04:29.558682   53566 notify.go:220] Checking for updates...
	I0920 22:04:29.559263   53566 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:04:29.561748   53566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:04:29.564576   53566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:04:29.566871   53566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:04:29.568212   53566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:04:29.570945   53566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:04:29.572423   53566 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:04:29.611868   53566 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:04:29.614621   53566 start.go:297] selected driver: kvm2
	I0920 22:04:29.614635   53566 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:04:29.614649   53566 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:04:29.615456   53566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:04:29.631679   53566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:04:29.649118   53566 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:04:29.649181   53566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:04:29.649528   53566 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 22:04:29.649562   53566 cni.go:84] Creating CNI manager for ""
	I0920 22:04:29.649625   53566 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:04:29.649652   53566 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:04:29.649717   53566 start.go:340] cluster config:
	{Name:kubernetes-upgrade-696728 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-696728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:04:29.649867   53566 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:04:29.652031   53566 out.go:177] * Starting "kubernetes-upgrade-696728" primary control-plane node in "kubernetes-upgrade-696728" cluster
	I0920 22:04:29.653506   53566 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:04:29.653555   53566 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:04:29.653577   53566 cache.go:56] Caching tarball of preloaded images
	I0920 22:04:29.653666   53566 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:04:29.653680   53566 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:04:29.654117   53566 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/config.json ...
	I0920 22:04:29.654150   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/config.json: {Name:mk8bc5484e16b12d13b9a587f95949230e5fb639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:04:29.654332   53566 start.go:360] acquireMachinesLock for kubernetes-upgrade-696728: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:04:58.416402   53566 start.go:364] duration metric: took 28.762034988s to acquireMachinesLock for "kubernetes-upgrade-696728"
	I0920 22:04:58.416486   53566 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-696728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-696728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:04:58.416600   53566 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 22:04:58.419376   53566 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 22:04:58.419547   53566 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:04:58.419611   53566 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:04:58.435628   53566 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37093
	I0920 22:04:58.436111   53566 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:04:58.436649   53566 main.go:141] libmachine: Using API Version  1
	I0920 22:04:58.436669   53566 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:04:58.437006   53566 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:04:58.437154   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetMachineName
	I0920 22:04:58.437294   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:04:58.437409   53566 start.go:159] libmachine.API.Create for "kubernetes-upgrade-696728" (driver="kvm2")
	I0920 22:04:58.437443   53566 client.go:168] LocalClient.Create starting
	I0920 22:04:58.437475   53566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 22:04:58.437519   53566 main.go:141] libmachine: Decoding PEM data...
	I0920 22:04:58.437543   53566 main.go:141] libmachine: Parsing certificate...
	I0920 22:04:58.437611   53566 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 22:04:58.437639   53566 main.go:141] libmachine: Decoding PEM data...
	I0920 22:04:58.437659   53566 main.go:141] libmachine: Parsing certificate...
	I0920 22:04:58.437684   53566 main.go:141] libmachine: Running pre-create checks...
	I0920 22:04:58.437702   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .PreCreateCheck
	I0920 22:04:58.438024   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetConfigRaw
	I0920 22:04:58.438376   53566 main.go:141] libmachine: Creating machine...
	I0920 22:04:58.438392   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Create
	I0920 22:04:58.438509   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Creating KVM machine...
	I0920 22:04:58.439563   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found existing default KVM network
	I0920 22:04:58.440425   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.440280   53920 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:ec:9d:cd} reservation:<nil>}
	I0920 22:04:58.440968   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.440901   53920 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0001236e0}
	I0920 22:04:58.441028   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | created network xml: 
	I0920 22:04:58.441047   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | <network>
	I0920 22:04:58.441066   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   <name>mk-kubernetes-upgrade-696728</name>
	I0920 22:04:58.441079   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   <dns enable='no'/>
	I0920 22:04:58.441087   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   
	I0920 22:04:58.441095   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0920 22:04:58.441103   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |     <dhcp>
	I0920 22:04:58.441116   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0920 22:04:58.441136   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |     </dhcp>
	I0920 22:04:58.441151   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   </ip>
	I0920 22:04:58.441162   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG |   
	I0920 22:04:58.441182   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | </network>
	I0920 22:04:58.441194   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | 
	I0920 22:04:58.446198   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | trying to create private KVM network mk-kubernetes-upgrade-696728 192.168.50.0/24...
	I0920 22:04:58.519950   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | private KVM network mk-kubernetes-upgrade-696728 192.168.50.0/24 created
	I0920 22:04:58.520005   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.519843   53920 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:04:58.520026   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728 ...
	I0920 22:04:58.520053   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 22:04:58.520069   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 22:04:58.764130   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.764018   53920 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa...
	I0920 22:04:58.900189   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.900040   53920 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/kubernetes-upgrade-696728.rawdisk...
	I0920 22:04:58.900218   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Writing magic tar header
	I0920 22:04:58.900233   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Writing SSH key tar header
	I0920 22:04:58.900246   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:04:58.900164   53920 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728 ...
	I0920 22:04:58.900261   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728
	I0920 22:04:58.900330   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728 (perms=drwx------)
	I0920 22:04:58.900350   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 22:04:58.900363   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 22:04:58.900383   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 22:04:58.900398   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:04:58.900422   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 22:04:58.900436   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 22:04:58.900454   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 22:04:58.900466   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home/jenkins
	I0920 22:04:58.900478   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Checking permissions on dir: /home
	I0920 22:04:58.900495   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 22:04:58.900509   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 22:04:58.900521   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Skipping /home - not owner
	I0920 22:04:58.900537   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Creating domain...
	I0920 22:04:58.901700   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) define libvirt domain using xml: 
	I0920 22:04:58.901726   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) <domain type='kvm'>
	I0920 22:04:58.901738   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <name>kubernetes-upgrade-696728</name>
	I0920 22:04:58.901750   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <memory unit='MiB'>2200</memory>
	I0920 22:04:58.901777   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <vcpu>2</vcpu>
	I0920 22:04:58.901789   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <features>
	I0920 22:04:58.901822   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <acpi/>
	I0920 22:04:58.901861   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <apic/>
	I0920 22:04:58.901876   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <pae/>
	I0920 22:04:58.901887   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     
	I0920 22:04:58.901905   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   </features>
	I0920 22:04:58.901920   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <cpu mode='host-passthrough'>
	I0920 22:04:58.901934   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   
	I0920 22:04:58.901946   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   </cpu>
	I0920 22:04:58.901957   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <os>
	I0920 22:04:58.901970   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <type>hvm</type>
	I0920 22:04:58.901990   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <boot dev='cdrom'/>
	I0920 22:04:58.902007   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <boot dev='hd'/>
	I0920 22:04:58.902019   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <bootmenu enable='no'/>
	I0920 22:04:58.902029   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   </os>
	I0920 22:04:58.902038   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   <devices>
	I0920 22:04:58.902049   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <disk type='file' device='cdrom'>
	I0920 22:04:58.902066   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/boot2docker.iso'/>
	I0920 22:04:58.902077   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <target dev='hdc' bus='scsi'/>
	I0920 22:04:58.902088   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <readonly/>
	I0920 22:04:58.902106   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </disk>
	I0920 22:04:58.902127   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <disk type='file' device='disk'>
	I0920 22:04:58.902145   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 22:04:58.902165   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/kubernetes-upgrade-696728.rawdisk'/>
	I0920 22:04:58.902177   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <target dev='hda' bus='virtio'/>
	I0920 22:04:58.902187   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </disk>
	I0920 22:04:58.902199   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <interface type='network'>
	I0920 22:04:58.902210   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <source network='mk-kubernetes-upgrade-696728'/>
	I0920 22:04:58.902222   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <model type='virtio'/>
	I0920 22:04:58.902229   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </interface>
	I0920 22:04:58.902241   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <interface type='network'>
	I0920 22:04:58.902248   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <source network='default'/>
	I0920 22:04:58.902262   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <model type='virtio'/>
	I0920 22:04:58.902271   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </interface>
	I0920 22:04:58.902279   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <serial type='pty'>
	I0920 22:04:58.902289   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <target port='0'/>
	I0920 22:04:58.902297   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </serial>
	I0920 22:04:58.902306   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <console type='pty'>
	I0920 22:04:58.902316   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <target type='serial' port='0'/>
	I0920 22:04:58.902330   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </console>
	I0920 22:04:58.902341   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     <rng model='virtio'>
	I0920 22:04:58.902350   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)       <backend model='random'>/dev/random</backend>
	I0920 22:04:58.902361   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     </rng>
	I0920 22:04:58.902370   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     
	I0920 22:04:58.902378   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)     
	I0920 22:04:58.902387   53566 main.go:141] libmachine: (kubernetes-upgrade-696728)   </devices>
	I0920 22:04:58.902396   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) </domain>
	I0920 22:04:58.902415   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) 
	I0920 22:04:58.909116   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:48:3a:50 in network default
	I0920 22:04:58.909759   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:04:58.909798   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Ensuring networks are active...
	I0920 22:04:58.910592   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Ensuring network default is active
	I0920 22:04:58.910942   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Ensuring network mk-kubernetes-upgrade-696728 is active
	I0920 22:04:58.911509   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Getting domain xml...
	I0920 22:04:58.912321   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Creating domain...
	I0920 22:05:00.204426   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Waiting to get IP...
	I0920 22:05:00.205376   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.205756   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.205839   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:00.205762   53920 retry.go:31] will retry after 267.953828ms: waiting for machine to come up
	I0920 22:05:00.475496   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.476006   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.476037   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:00.475957   53920 retry.go:31] will retry after 345.090749ms: waiting for machine to come up
	I0920 22:05:00.822555   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.823061   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:00.823088   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:00.823030   53920 retry.go:31] will retry after 322.993407ms: waiting for machine to come up
	I0920 22:05:01.147628   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:01.148106   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:01.148134   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:01.148058   53920 retry.go:31] will retry after 528.446596ms: waiting for machine to come up
	I0920 22:05:01.677740   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:01.678342   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:01.678366   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:01.678308   53920 retry.go:31] will retry after 491.442766ms: waiting for machine to come up
	I0920 22:05:02.171016   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:02.171492   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:02.171543   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:02.171472   53920 retry.go:31] will retry after 754.691434ms: waiting for machine to come up
	I0920 22:05:02.927519   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:02.927996   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:02.928023   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:02.927932   53920 retry.go:31] will retry after 757.327186ms: waiting for machine to come up
	I0920 22:05:03.687352   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:03.687870   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:03.687897   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:03.687823   53920 retry.go:31] will retry after 953.094685ms: waiting for machine to come up
	I0920 22:05:04.642350   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:04.642836   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:04.642863   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:04.642784   53920 retry.go:31] will retry after 1.454386492s: waiting for machine to come up
	I0920 22:05:06.098955   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:06.099385   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:06.099410   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:06.099348   53920 retry.go:31] will retry after 2.297498679s: waiting for machine to come up
	I0920 22:05:08.399753   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:08.400218   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:08.400246   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:08.400171   53920 retry.go:31] will retry after 2.598561174s: waiting for machine to come up
	I0920 22:05:11.000485   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:11.001035   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:11.001061   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:11.000994   53920 retry.go:31] will retry after 3.12053232s: waiting for machine to come up
	I0920 22:05:14.123557   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:14.123970   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:14.124000   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:14.123932   53920 retry.go:31] will retry after 3.727234673s: waiting for machine to come up
	I0920 22:05:17.855750   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:17.856224   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find current IP address of domain kubernetes-upgrade-696728 in network mk-kubernetes-upgrade-696728
	I0920 22:05:17.856245   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | I0920 22:05:17.856188   53920 retry.go:31] will retry after 4.92091824s: waiting for machine to come up
	I0920 22:05:22.778335   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.778784   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Found IP for machine: 192.168.50.192
	I0920 22:05:22.778815   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has current primary IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.778824   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Reserving static IP address...
	I0920 22:05:22.779256   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-696728", mac: "52:54:00:c0:af:5b", ip: "192.168.50.192"} in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.856006   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Getting to WaitForSSH function...
	I0920 22:05:22.856034   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Reserved static IP address: 192.168.50.192
	I0920 22:05:22.856071   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Waiting for SSH to be available...
	I0920 22:05:22.858550   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.858948   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:22.858992   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.859107   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Using SSH client type: external
	I0920 22:05:22.859130   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa (-rw-------)
	I0920 22:05:22.859181   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.192 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:05:22.859198   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | About to run SSH command:
	I0920 22:05:22.859217   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | exit 0
	I0920 22:05:22.991172   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | SSH cmd err, output: <nil>: 
	I0920 22:05:22.991427   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) KVM machine creation complete!
	I0920 22:05:22.991743   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetConfigRaw
	I0920 22:05:22.992330   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:22.992483   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:22.992623   53566 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 22:05:22.992638   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetState
	I0920 22:05:22.993759   53566 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 22:05:22.993774   53566 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 22:05:22.993781   53566 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 22:05:22.993789   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:22.995931   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.996227   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:22.996253   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:22.996404   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:22.996585   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:22.996708   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:22.996821   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:22.996940   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:22.997126   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:22.997136   53566 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 22:05:23.115064   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:05:23.115092   53566 main.go:141] libmachine: Detecting the provisioner...
	I0920 22:05:23.115102   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.118189   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.118574   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.118596   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.118747   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:23.118930   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.119078   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.119185   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:23.119369   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:23.119554   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:23.119566   53566 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 22:05:23.235902   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 22:05:23.235969   53566 main.go:141] libmachine: found compatible host: buildroot
	I0920 22:05:23.235976   53566 main.go:141] libmachine: Provisioning with buildroot...
	I0920 22:05:23.235983   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetMachineName
	I0920 22:05:23.236208   53566 buildroot.go:166] provisioning hostname "kubernetes-upgrade-696728"
	I0920 22:05:23.236233   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetMachineName
	I0920 22:05:23.236423   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.239193   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.239610   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.239640   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.239768   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:23.239940   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.240103   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.240227   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:23.240477   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:23.240640   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:23.240653   53566 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-696728 && echo "kubernetes-upgrade-696728" | sudo tee /etc/hostname
	I0920 22:05:23.369759   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-696728
	
	I0920 22:05:23.369791   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.372955   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.373333   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.373366   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.373532   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:23.373732   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.373922   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.374056   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:23.374214   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:23.374416   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:23.374442   53566 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-696728' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-696728/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-696728' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:05:23.496354   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:05:23.496380   53566 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:05:23.496413   53566 buildroot.go:174] setting up certificates
	I0920 22:05:23.496423   53566 provision.go:84] configureAuth start
	I0920 22:05:23.496431   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetMachineName
	I0920 22:05:23.496685   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetIP
	I0920 22:05:23.499406   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.499891   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.499921   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.500096   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.502184   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.502478   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.502528   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.502582   53566 provision.go:143] copyHostCerts
	I0920 22:05:23.502646   53566 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:05:23.502669   53566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:05:23.502745   53566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:05:23.502850   53566 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:05:23.502860   53566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:05:23.502886   53566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:05:23.502955   53566 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:05:23.502978   53566 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:05:23.503007   53566 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:05:23.503070   53566 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-696728 san=[127.0.0.1 192.168.50.192 kubernetes-upgrade-696728 localhost minikube]
	I0920 22:05:23.639387   53566 provision.go:177] copyRemoteCerts
	I0920 22:05:23.639453   53566 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:05:23.639475   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.642097   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.642402   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.642433   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.642579   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:23.642743   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.642872   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:23.642998   53566 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:05:23.729542   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0920 22:05:23.754680   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:05:23.779359   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:05:23.803949   53566 provision.go:87] duration metric: took 307.515488ms to configureAuth
	I0920 22:05:23.803979   53566 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:05:23.804185   53566 config.go:182] Loaded profile config "kubernetes-upgrade-696728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:05:23.804306   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:23.806672   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.806987   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:23.807017   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:23.807241   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:23.807421   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.807569   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:23.807712   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:23.807856   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:23.808008   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:23.808028   53566 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:05:24.034442   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:05:24.034463   53566 main.go:141] libmachine: Checking connection to Docker...
	I0920 22:05:24.034471   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetURL
	I0920 22:05:24.035735   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Using libvirt version 6000000
	I0920 22:05:24.038061   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.038391   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.038421   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.038569   53566 main.go:141] libmachine: Docker is up and running!
	I0920 22:05:24.038584   53566 main.go:141] libmachine: Reticulating splines...
	I0920 22:05:24.038593   53566 client.go:171] duration metric: took 25.601139234s to LocalClient.Create
	I0920 22:05:24.038617   53566 start.go:167] duration metric: took 25.60120871s to libmachine.API.Create "kubernetes-upgrade-696728"
	I0920 22:05:24.038629   53566 start.go:293] postStartSetup for "kubernetes-upgrade-696728" (driver="kvm2")
	I0920 22:05:24.038641   53566 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:05:24.038662   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:24.038878   53566 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:05:24.038900   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:24.041179   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.041622   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.041655   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.041830   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:24.042074   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:24.042248   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:24.042396   53566 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:05:24.130167   53566 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:05:24.134372   53566 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:05:24.134393   53566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:05:24.134455   53566 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:05:24.134543   53566 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:05:24.134654   53566 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:05:24.144680   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:05:24.169704   53566 start.go:296] duration metric: took 131.058008ms for postStartSetup
	I0920 22:05:24.169769   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetConfigRaw
	I0920 22:05:24.170402   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetIP
	I0920 22:05:24.172985   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.173300   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.173323   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.173541   53566 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/config.json ...
	I0920 22:05:24.173747   53566 start.go:128] duration metric: took 25.757134973s to createHost
	I0920 22:05:24.173777   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:24.175894   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.176227   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.176249   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.176376   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:24.176549   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:24.176687   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:24.176811   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:24.176951   53566 main.go:141] libmachine: Using SSH client type: native
	I0920 22:05:24.177150   53566 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.192 22 <nil> <nil>}
	I0920 22:05:24.177163   53566 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:05:24.291970   53566 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726869924.265880318
	
	I0920 22:05:24.291990   53566 fix.go:216] guest clock: 1726869924.265880318
	I0920 22:05:24.292004   53566 fix.go:229] Guest: 2024-09-20 22:05:24.265880318 +0000 UTC Remote: 2024-09-20 22:05:24.173759701 +0000 UTC m=+54.675202099 (delta=92.120617ms)
	I0920 22:05:24.292061   53566 fix.go:200] guest clock delta is within tolerance: 92.120617ms
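For reference, the guest/host clock comparison logged just above (delta 92.120617ms, judged "within tolerance") can be sketched in a few lines of Go. The helper name and the 1s tolerance below are illustrative assumptions, not minikube's actual fix.go API; the timestamps are taken from the log lines above.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock and the host clock
// differ by no more than the allowed drift, returning the absolute delta.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values from the log: guest epoch 1726869924.265880318, host 22:05:24.173759701 UTC.
	guest := time.Unix(1726869924, 265880318).UTC()
	host := time.Date(2024, 9, 20, 22, 5, 24, 173759701, time.UTC)

	delta, ok := withinTolerance(guest, host, 1*time.Second) // assumed tolerance
	fmt.Printf("guest clock delta=%v, within tolerance=%v\n", delta, ok)
}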
	I0920 22:05:24.292070   53566 start.go:83] releasing machines lock for "kubernetes-upgrade-696728", held for 25.875626963s
	I0920 22:05:24.292105   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:24.292368   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetIP
	I0920 22:05:24.295472   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.295888   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.295918   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.296127   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:24.296703   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:24.296888   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:05:24.296979   53566 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:05:24.297019   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:24.297153   53566 ssh_runner.go:195] Run: cat /version.json
	I0920 22:05:24.297179   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:05:24.300149   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.300194   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.300551   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.300584   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.300623   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:24.300642   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:24.300909   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:24.300944   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:05:24.301176   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:24.301195   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:05:24.301375   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:24.301376   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:05:24.301544   53566 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:05:24.301554   53566 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:05:24.416070   53566 ssh_runner.go:195] Run: systemctl --version
	I0920 22:05:24.423110   53566 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:05:24.589849   53566 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:05:24.596352   53566 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:05:24.596423   53566 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:05:24.613731   53566 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:05:24.613754   53566 start.go:495] detecting cgroup driver to use...
	I0920 22:05:24.613819   53566 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:05:24.632207   53566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:05:24.648023   53566 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:05:24.648081   53566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:05:24.663216   53566 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:05:24.677625   53566 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:05:24.809739   53566 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:05:24.961426   53566 docker.go:233] disabling docker service ...
	I0920 22:05:24.961497   53566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:05:24.977155   53566 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:05:24.991656   53566 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:05:25.134878   53566 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:05:25.285914   53566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:05:25.300489   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:05:25.320841   53566 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:05:25.320902   53566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:05:25.331729   53566 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:05:25.331796   53566 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:05:25.344466   53566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:05:25.357577   53566 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
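The three sed invocations above rewrite keys in the CRI-O drop-in config (pause_image, cgroup_manager, conmon_cgroup). A minimal Go sketch of the same "replace the key's line or append it" edit is shown below; setConfKey is an illustrative helper, not minikube's code, and the file path and values mirror the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey rewrites (or appends) `key = "value"` in a TOML-style conf file,
// the same effect as: sed -i 's|^.*key = .*$|key = "value"|' <path>
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf(`%s = "%s"`, key, value)
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte(line))
	} else {
		data = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.2")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
	_ = setConfKey(conf, "conmon_cgroup", "pod") // the log deletes and re-adds this key
}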
	I0920 22:05:25.369059   53566 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:05:25.380704   53566 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:05:25.391105   53566 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:05:25.391163   53566 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:05:25.405828   53566 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
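The sequence above is a fallback: the bridge-netfilter sysctl is missing (status 255), so minikube loads br_netfilter and then enables IPv4 forwarding. A minimal Go sketch of that check-then-load pattern follows; it must run as root (the log uses sudo) and is illustrative only.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The sysctl failed because this proc entry did not exist yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Load the kernel module that creates the bridge netfilter sysctls.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
		log.Fatal(err)
	}
}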
	I0920 22:05:25.416218   53566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:05:25.547554   53566 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:05:25.660737   53566 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:05:25.660821   53566 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:05:25.666124   53566 start.go:563] Will wait 60s for crictl version
	I0920 22:05:25.666186   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:25.670220   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:05:25.708799   53566 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
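The crictl version output above is a simple "Key:  value" listing. A small Go sketch that runs the same command and extracts RuntimeVersion is shown below, purely as an illustration of parsing that output; it is not part of the test run.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Println("runtime:", fields["RuntimeName"], "version:", fields["RuntimeVersion"])
}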
	I0920 22:05:25.708882   53566 ssh_runner.go:195] Run: crio --version
	I0920 22:05:25.742725   53566 ssh_runner.go:195] Run: crio --version
	I0920 22:05:25.782804   53566 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:05:25.784297   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetIP
	I0920 22:05:25.788015   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:25.788531   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:05:13 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:05:25.788600   53566 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:05:25.788897   53566 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:05:25.793805   53566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
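The bash one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, then append the current mapping. A minimal Go sketch of the same edit follows; the entry value mirrors the log, and the approach (in-place rewrite rather than a temp file plus sudo cp) is an assumption of the sketch.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.50.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any stale host.minikube.internal mapping.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}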
	I0920 22:05:25.808069   53566 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-696728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-696728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:05:25.808213   53566 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:05:25.808275   53566 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:05:25.859575   53566 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:05:25.859651   53566 ssh_runner.go:195] Run: which lz4
	I0920 22:05:25.863700   53566 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:05:25.867936   53566 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:05:25.867970   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:05:27.659704   53566 crio.go:462] duration metric: took 1.796042578s to copy over tarball
	I0920 22:05:27.659785   53566 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:05:30.833876   53566 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.174050818s)
	I0920 22:05:30.833907   53566 crio.go:469] duration metric: took 3.174173637s to extract the tarball
	I0920 22:05:30.833916   53566 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:05:30.887344   53566 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:05:30.951171   53566 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:05:30.951197   53566 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:05:30.951293   53566 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:05:30.951329   53566 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:05:30.951345   53566 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:30.951322   53566 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:30.951372   53566 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:30.951381   53566 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:05:30.951349   53566 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:30.951304   53566 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:30.953025   53566 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:05:30.953061   53566 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:30.953072   53566 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:30.953097   53566 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:05:30.953114   53566 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:30.953205   53566 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:30.953248   53566 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:30.953276   53566 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.115000   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:31.115883   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:31.117252   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:31.124093   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:31.129268   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.129333   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:31.174131   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:05:31.253535   53566 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:05:31.253578   53566 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:31.253625   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.268443   53566 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:05:31.268494   53566 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:31.268545   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.277591   53566 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:05:31.277638   53566 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:31.277688   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.295388   53566 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:05:31.295446   53566 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:31.295530   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.322248   53566 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:05:31.322297   53566 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:31.322301   53566 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:05:31.322346   53566 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.322350   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.322395   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.330266   53566 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:05:31.330359   53566 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:05:31.330403   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:31.330441   53566 ssh_runner.go:195] Run: which crictl
	I0920 22:05:31.330366   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:31.330326   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:31.330534   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:31.330605   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:31.333896   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.487053   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:05:31.487468   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:31.490756   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:31.490825   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:31.490921   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:31.491062   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:31.496141   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.626791   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:05:31.652690   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:05:31.669934   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:05:31.669937   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:05:31.670033   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:05:31.670158   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:05:31.670429   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:05:31.735693   53566 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:05:31.779030   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:05:31.839629   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:05:31.839689   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:05:31.839990   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:05:31.840046   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:05:31.840129   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:05:31.840860   53566 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:05:31.866374   53566 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:05:31.999504   53566 cache_images.go:92] duration metric: took 1.048286994s to LoadCachedImages
	W0920 22:05:31.999603   53566 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0920 22:05:31.999619   53566 kubeadm.go:934] updating node { 192.168.50.192 8443 v1.20.0 crio true true} ...
	I0920 22:05:31.999718   53566 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-696728 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-696728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:05:31.999783   53566 ssh_runner.go:195] Run: crio config
	I0920 22:05:32.051684   53566 cni.go:84] Creating CNI manager for ""
	I0920 22:05:32.051715   53566 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:05:32.051728   53566 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:05:32.051754   53566 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.192 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-696728 NodeName:kubernetes-upgrade-696728 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.192"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:05:32.051940   53566 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-696728"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.192"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:05:32.052016   53566 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:05:32.063131   53566 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:05:32.063207   53566 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:05:32.073958   53566 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0920 22:05:32.097052   53566 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:05:32.119078   53566 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0920 22:05:32.140245   53566 ssh_runner.go:195] Run: grep 192.168.50.192	control-plane.minikube.internal$ /etc/hosts
	I0920 22:05:32.144288   53566 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.192	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:05:32.158853   53566 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:05:32.299190   53566 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:05:32.322198   53566 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728 for IP: 192.168.50.192
	I0920 22:05:32.322234   53566 certs.go:194] generating shared ca certs ...
	I0920 22:05:32.322264   53566 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:32.322447   53566 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:05:32.322499   53566 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:05:32.322514   53566 certs.go:256] generating profile certs ...
	I0920 22:05:32.322581   53566 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key
	I0920 22:05:32.322600   53566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt with IP's: []
	I0920 22:05:32.463213   53566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt ...
	I0920 22:05:32.463246   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt: {Name:mk7667320132a3772874c61206906cd0d2555b12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:32.463471   53566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key ...
	I0920 22:05:32.463496   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key: {Name:mk6b28aea4c661ac599d0fcba547b7806fae058f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:32.463641   53566 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key.33955ac0
	I0920 22:05:32.463660   53566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt.33955ac0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.192]
	I0920 22:05:32.862557   53566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt.33955ac0 ...
	I0920 22:05:32.862588   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt.33955ac0: {Name:mk04fa7bc177f669f3f78d2e9777b81029b18668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:32.862752   53566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key.33955ac0 ...
	I0920 22:05:32.862775   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key.33955ac0: {Name:mkfeaf0ecfdc36a857a0568609667452ae460827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:32.862903   53566 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt.33955ac0 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt
	I0920 22:05:32.863037   53566 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key.33955ac0 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key
	I0920 22:05:32.863101   53566 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.key
	I0920 22:05:32.863116   53566 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.crt with IP's: []
	I0920 22:05:33.036807   53566 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.crt ...
	I0920 22:05:33.036841   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.crt: {Name:mkc87adbbcdd1e9b390f81bf261b8b25c57e1d87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:33.037024   53566 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.key ...
	I0920 22:05:33.037040   53566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.key: {Name:mk6d06a22295e1a025d1f7edcf0d5f90ffb029be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:05:33.037234   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:05:33.037286   53566 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:05:33.037302   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:05:33.037336   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:05:33.037371   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:05:33.037403   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:05:33.037456   53566 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:05:33.038053   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:05:33.079099   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:05:33.114345   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:05:33.178150   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:05:33.275825   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0920 22:05:33.359706   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:05:33.393462   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:05:33.424253   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 22:05:33.462285   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:05:33.490522   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:05:33.521347   53566 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:05:33.569001   53566 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:05:33.593985   53566 ssh_runner.go:195] Run: openssl version
	I0920 22:05:33.601668   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:05:33.617822   53566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:05:33.624833   53566 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:05:33.624910   53566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:05:33.632747   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:05:33.645979   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:05:33.661478   53566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:05:33.667651   53566 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:05:33.667715   53566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:05:33.675629   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:05:33.692748   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:05:33.707626   53566 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:05:33.713914   53566 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:05:33.713992   53566 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:05:33.721517   53566 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
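	The three hash/symlink steps above follow OpenSSL's subject-hash convention: each CA copied into /usr/share/ca-certificates is hashed with `openssl x509 -hash` and linked as <hash>.0 under /etc/ssl/certs so the TLS stack can find it. A minimal sketch of the same wiring for one PEM (illustrative only; paths and the b5213941 hash are taken from the log above):
	    # reproduce minikube's CA trust wiring for a single PEM file
	    PEM=/usr/share/ca-certificates/minikubeCA.pem      # assumed already copied to the guest
	    HASH=$(openssl x509 -hash -noout -in "$PEM")        # subject-name hash, e.g. b5213941
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"       # where OpenSSL looks the CA up
	    ls -la "/etc/ssl/certs/${HASH}.0"                   # verify the symlink was created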
	I0920 22:05:33.735020   53566 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:05:33.740211   53566 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 22:05:33.740282   53566 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-696728 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-696728 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:05:33.740382   53566 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:05:33.740439   53566 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:05:33.789060   53566 cri.go:89] found id: ""
	I0920 22:05:33.789140   53566 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:05:33.802829   53566 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:05:33.817189   53566 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:05:33.832071   53566 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:05:33.832093   53566 kubeadm.go:157] found existing configuration files:
	
	I0920 22:05:33.832150   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:05:33.844982   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:05:33.845056   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:05:33.856318   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:05:33.870514   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:05:33.870581   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:05:33.882657   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:05:33.894755   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:05:33.894816   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:05:33.908471   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:05:33.920155   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:05:33.920214   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
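	The grep/rm sequence above is minikube's stale-config check: for each kubeconfig under /etc/kubernetes it looks for the expected control-plane endpoint and removes the file when the endpoint is missing (here every grep exits with status 2 simply because the files do not exist yet). A hedged shell equivalent, with the endpoint and file list copied from the log:
	    # sketch of the stale-config cleanup pass seen above
	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # keep the file only if it already points at the expected endpoint
	      if ! sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done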
	I0920 22:05:33.936559   53566 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:05:34.298821   53566 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:07:32.439660   53566 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:07:32.439760   53566 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:07:32.441185   53566 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:07:32.441240   53566 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:07:32.441369   53566 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:07:32.441472   53566 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:07:32.441599   53566 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:07:32.441679   53566 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:07:32.443759   53566 out.go:235]   - Generating certificates and keys ...
	I0920 22:07:32.443843   53566 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:07:32.443924   53566 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:07:32.444056   53566 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 22:07:32.444153   53566 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 22:07:32.444250   53566 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 22:07:32.444331   53566 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 22:07:32.444409   53566 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 22:07:32.444593   53566 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0920 22:07:32.444669   53566 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 22:07:32.444821   53566 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	I0920 22:07:32.444923   53566 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 22:07:32.445034   53566 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 22:07:32.445102   53566 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 22:07:32.445177   53566 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:07:32.445274   53566 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:07:32.445351   53566 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:07:32.445430   53566 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:07:32.445510   53566 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:07:32.445647   53566 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:07:32.445761   53566 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:07:32.445826   53566 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:07:32.445923   53566 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:07:32.447314   53566 out.go:235]   - Booting up control plane ...
	I0920 22:07:32.447421   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:07:32.447535   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:07:32.447629   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:07:32.447756   53566 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:07:32.447965   53566 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:07:32.448014   53566 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:07:32.448068   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:07:32.448249   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:07:32.448325   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:07:32.448525   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:07:32.448633   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:07:32.448936   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:07:32.449036   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:07:32.449259   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:07:32.449361   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:07:32.449622   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:07:32.449632   53566 kubeadm.go:310] 
	I0920 22:07:32.449691   53566 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:07:32.449747   53566 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:07:32.449756   53566 kubeadm.go:310] 
	I0920 22:07:32.449806   53566 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:07:32.449851   53566 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:07:32.450000   53566 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:07:32.450010   53566 kubeadm.go:310] 
	I0920 22:07:32.450157   53566 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:07:32.450204   53566 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:07:32.450252   53566 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:07:32.450260   53566 kubeadm.go:310] 
	I0920 22:07:32.450383   53566 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:07:32.450518   53566 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:07:32.450537   53566 kubeadm.go:310] 
	I0920 22:07:32.450684   53566 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:07:32.450814   53566 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:07:32.450936   53566 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:07:32.451068   53566 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:07:32.451098   53566 kubeadm.go:310] 
	W0920 22:07:32.451237   53566 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-696728 localhost] and IPs [192.168.50.192 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
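	Before the retry below, the failure pattern is that the kubelet never answers its health endpoint, so no static pod ever comes up. The checks kubeadm suggests can be run by hand on the node; a sketch, assuming SSH access to the guest, with the commands and the healthz probe taken from the output above (CONTAINERID is a placeholder):
	    # manual version of the kubelet-check and the suggested troubleshooting steps
	    curl -sSL http://localhost:10248/healthz            # the probe kubeadm keeps retrying
	    sudo systemctl status kubelet                       # is the service running at all?
	    sudo journalctl -xeu kubelet | tail -n 50           # recent kubelet errors
	    # list any control-plane containers the runtime started, then inspect one
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID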
	I0920 22:07:32.451302   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:07:33.999115   53566 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.547776775s)
	I0920 22:07:33.999204   53566 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:07:34.021672   53566 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:07:34.037591   53566 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:07:34.037613   53566 kubeadm.go:157] found existing configuration files:
	
	I0920 22:07:34.037667   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:07:34.052343   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:07:34.052417   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:07:34.068688   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:07:34.080983   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:07:34.081044   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:07:34.093130   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:07:34.109150   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:07:34.109218   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:07:34.119955   53566 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:07:34.135894   53566 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:07:34.135969   53566 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:07:34.151002   53566 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:07:34.252980   53566 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:07:34.253050   53566 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:07:34.449885   53566 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:07:34.450009   53566 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:07:34.450104   53566 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:07:34.682597   53566 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:07:34.684645   53566 out.go:235]   - Generating certificates and keys ...
	I0920 22:07:34.684762   53566 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:07:34.684859   53566 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:07:34.684966   53566 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:07:34.685101   53566 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:07:34.685231   53566 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:07:34.685317   53566 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:07:34.686122   53566 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:07:34.686937   53566 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:07:34.687726   53566 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:07:34.688277   53566 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:07:34.693368   53566 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:07:34.693461   53566 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:07:35.051582   53566 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:07:35.302160   53566 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:07:35.430267   53566 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:07:35.714430   53566 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:07:35.732823   53566 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:07:35.734246   53566 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:07:35.734327   53566 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:07:35.912795   53566 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:07:35.916062   53566 out.go:235]   - Booting up control plane ...
	I0920 22:07:35.916212   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:07:35.926581   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:07:35.927876   53566 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:07:35.928985   53566 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:07:35.932458   53566 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:08:15.935654   53566 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:08:15.935948   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:08:15.936227   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:08:20.936974   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:08:20.937239   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:08:30.937454   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:08:30.937710   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:08:50.936420   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:08:50.936715   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:09:30.936903   53566 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:09:30.937117   53566 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:09:30.937155   53566 kubeadm.go:310] 
	I0920 22:09:30.937225   53566 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:09:30.937275   53566 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:09:30.937290   53566 kubeadm.go:310] 
	I0920 22:09:30.937344   53566 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:09:30.937394   53566 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:09:30.937542   53566 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:09:30.937565   53566 kubeadm.go:310] 
	I0920 22:09:30.937722   53566 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:09:30.937774   53566 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:09:30.937838   53566 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:09:30.937852   53566 kubeadm.go:310] 
	I0920 22:09:30.937997   53566 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:09:30.938132   53566 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:09:30.938151   53566 kubeadm.go:310] 
	I0920 22:09:30.938324   53566 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:09:30.938452   53566 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:09:30.938563   53566 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:09:30.938666   53566 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:09:30.938678   53566 kubeadm.go:310] 
	I0920 22:09:30.938997   53566 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:09:30.939124   53566 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:09:30.939217   53566 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:09:30.939300   53566 kubeadm.go:394] duration metric: took 3m57.199022694s to StartCluster
	I0920 22:09:30.939347   53566 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:09:30.939401   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:09:30.990590   53566 cri.go:89] found id: ""
	I0920 22:09:30.990624   53566 logs.go:276] 0 containers: []
	W0920 22:09:30.990636   53566 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:09:30.990645   53566 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:09:30.990719   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:09:31.032152   53566 cri.go:89] found id: ""
	I0920 22:09:31.032182   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.032190   53566 logs.go:278] No container was found matching "etcd"
	I0920 22:09:31.032196   53566 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:09:31.032250   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:09:31.070980   53566 cri.go:89] found id: ""
	I0920 22:09:31.071010   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.071018   53566 logs.go:278] No container was found matching "coredns"
	I0920 22:09:31.071025   53566 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:09:31.071086   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:09:31.110808   53566 cri.go:89] found id: ""
	I0920 22:09:31.110838   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.110850   53566 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:09:31.110858   53566 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:09:31.110923   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:09:31.147149   53566 cri.go:89] found id: ""
	I0920 22:09:31.147177   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.147187   53566 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:09:31.147203   53566 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:09:31.147274   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:09:31.189786   53566 cri.go:89] found id: ""
	I0920 22:09:31.189819   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.189830   53566 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:09:31.189838   53566 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:09:31.189902   53566 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:09:31.224611   53566 cri.go:89] found id: ""
	I0920 22:09:31.224643   53566 logs.go:276] 0 containers: []
	W0920 22:09:31.224654   53566 logs.go:278] No container was found matching "kindnet"
	I0920 22:09:31.224664   53566 logs.go:123] Gathering logs for container status ...
	I0920 22:09:31.224680   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:09:31.272071   53566 logs.go:123] Gathering logs for kubelet ...
	I0920 22:09:31.272111   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:09:31.324041   53566 logs.go:123] Gathering logs for dmesg ...
	I0920 22:09:31.324076   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:09:31.337820   53566 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:09:31.337859   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:09:31.488677   53566 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:09:31.488717   53566 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:09:31.488732   53566 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0920 22:09:31.603758   53566 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:09:31.603832   53566 out.go:270] * 
	* 
	W0920 22:09:31.603892   53566 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:09:31.603903   53566 out.go:270] * 
	* 
	W0920 22:09:31.604783   53566 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:09:31.608225   53566 out.go:201] 
	W0920 22:09:31.609690   53566 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:09:31.609735   53566 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:09:31.609756   53566 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:09:31.611245   53566 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
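The exit status 109 above corresponds to K8S_KUBELET_NOT_RUNNING: kubeadm gave up waiting for the kubelet's healthz endpoint on 127.0.0.1:10248. A minimal triage sketch, assuming shell access to the failed node via 'minikube ssh -p kubernetes-upgrade-696728' (profile name taken from this log); the retry flag is the suggestion printed in the output above, not a verified fix:

	# inside the VM: check whether the kubelet ever started, then read its journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers cri-o managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host: retry the start with the cgroup-driver hint from the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 \
	  --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd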
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-696728
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-696728: (1.443844578s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-696728 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-696728 status --format={{.Host}}: exit status 7 (73.901519ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (53.847962004s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-696728 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.98277ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-696728] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-696728
	    minikube start -p kubernetes-upgrade-696728 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6967282 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-696728 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
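The exit status 106 above is the K8S_DOWNGRADE_UNSUPPORTED guard: minikube refuses to downgrade an existing v1.31.1 cluster in place to v1.20.0 and instead prints the three recovery paths shown in the stderr block. A sketch of the first path (recreate the profile at the older version); the driver and runtime flags are assumed from the rest of this run, since the printed suggestion omits them:

	minikube delete -p kubernetes-upgrade-696728
	minikube start -p kubernetes-upgrade-696728 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio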
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-696728 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.216009037s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-09-20 22:11:15.426700056 +0000 UTC m=+5015.557835038
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-696728 -n kubernetes-upgrade-696728
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-696728 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-696728 logs -n 25: (3.635301789s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo docker                         | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo cat                            | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo                                | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo find                           | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p cilium-833441 sudo crio                           | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p cilium-833441                                     | cilium-833441          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p old-k8s-version-593412                            | old-k8s-version-593412 | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	|         | --container-runtime=crio                             |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                        |         |         |                     |                     |
	| ssh     | cert-options-335786 ssh                              | cert-options-335786    | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | openssl x509 -text -noout -in                        |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                        |         |         |                     |                     |
	| ssh     | -p cert-options-335786 -- sudo                       | cert-options-335786    | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                        |         |         |                     |                     |
	| delete  | -p cert-options-335786                               | cert-options-335786    | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                 | no-preload-032884      | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                        |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:10:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:10:56.695004   61652 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:56.695148   61652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:56.695157   61652 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:56.695164   61652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:56.695434   61652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:10:56.696234   61652 out.go:352] Setting JSON to false
	I0920 22:10:56.697534   61652 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6805,"bootTime":1726863452,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:10:56.697662   61652 start.go:139] virtualization: kvm guest
	I0920 22:10:56.700124   61652 out.go:177] * [no-preload-032884] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:10:56.701486   61652 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:10:56.701532   61652 notify.go:220] Checking for updates...
	I0920 22:10:56.704350   61652 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:56.705819   61652 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:10:56.707092   61652 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:10:56.708193   61652 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:10:56.709251   61652 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:10:56.710927   61652 config.go:182] Loaded profile config "cert-expiration-673228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:56.711074   61652 config.go:182] Loaded profile config "kubernetes-upgrade-696728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:56.711230   61652 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:10:56.711342   61652 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:56.749540   61652 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:10:56.750654   61652 start.go:297] selected driver: kvm2
	I0920 22:10:56.750671   61652 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:10:56.750686   61652 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:10:56.751743   61652 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.751841   61652 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:10:56.768041   61652 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:10:56.768105   61652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:10:56.768443   61652 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:10:56.768485   61652 cni.go:84] Creating CNI manager for ""
	I0920 22:10:56.768543   61652 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:10:56.768555   61652 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:10:56.768634   61652 start.go:340] cluster config:
	{Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:10:56.768784   61652 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.771000   61652 out.go:177] * Starting "no-preload-032884" primary control-plane node in "no-preload-032884" cluster
	I0920 22:10:57.406775   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:57.407466   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:57.407496   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:57.407418   61331 retry.go:31] will retry after 3.169825194s: waiting for machine to come up
	I0920 22:11:00.580086   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:00.580574   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:11:00.580600   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:11:00.580532   61331 retry.go:31] will retry after 3.614679602s: waiting for machine to come up
	I0920 22:10:56.772432   61652 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:10:56.772580   61652 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:10:56.772621   61652 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json: {Name:mk194a36f24eb24c23a23511023bb0edcbfc87d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:10:56.772647   61652 cache.go:107] acquiring lock: {Name:mk045114c6fb2512dba6de4a3cc96dc50c685b4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772679   61652 cache.go:107] acquiring lock: {Name:mk7d9c509153cd1b632c0e309c7a923523c3875c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772732   61652 cache.go:115] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0920 22:10:56.772744   61652 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.679µs
	I0920 22:10:56.772687   61652 cache.go:107] acquiring lock: {Name:mkb71792d2e6c90a3979fa0c42866e209469704d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772759   61652 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0920 22:10:56.772740   61652 cache.go:107] acquiring lock: {Name:mkd10ec35ed3e83bf2f189f51911a1f29f830071 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772774   61652 cache.go:107] acquiring lock: {Name:mkb038e0c96483bdfd3eb5009c0de223b6457545 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772838   61652 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:10:56.772842   61652 cache.go:107] acquiring lock: {Name:mk7a0632f440fe4a26b4c425e3fb8edda1dff89e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.772872   61652 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:10:56.772877   61652 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:10:56.772968   61652 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:10:56.772655   61652 cache.go:107] acquiring lock: {Name:mk497b29f4f5eadd04d7eaede46fdef23b8e1ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.773069   61652 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:10:56.773128   61652 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:10:56.773197   61652 cache.go:107] acquiring lock: {Name:mk209f191101e0e91a042643509bea586319a45e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:56.773212   61652 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:10:56.773298   61652 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:10:56.774489   61652 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:10:56.774494   61652 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:10:56.774545   61652 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:10:56.774487   61652 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:10:56.774642   61652 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:10:56.774727   61652 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:10:56.774836   61652 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:10:56.936406   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:10:56.949762   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:10:56.953972   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:10:56.957099   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:10:56.988475   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0920 22:10:56.999646   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:10:57.002993   61652 cache.go:162] opening:  /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:10:57.061653   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0920 22:10:57.061680   61652 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 289.007568ms
	I0920 22:10:57.061693   61652 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0920 22:10:57.449946   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0920 22:10:57.449979   61652 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 677.20563ms
	I0920 22:10:57.449994   61652 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0920 22:10:58.593709   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0920 22:10:58.593750   61652 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 1.821116183s
	I0920 22:10:58.593769   61652 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0920 22:10:58.700877   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0920 22:10:58.700905   61652 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 1.928219368s
	I0920 22:10:58.700918   61652 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0920 22:10:58.717721   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0920 22:10:58.717753   61652 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 1.944990375s
	I0920 22:10:58.717767   61652 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0920 22:10:58.787594   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0920 22:10:58.787627   61652 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.01489994s
	I0920 22:10:58.787642   61652 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0920 22:10:58.983471   61652 cache.go:157] /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0920 22:10:58.983499   61652 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.210304967s
	I0920 22:10:58.983513   61652 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0920 22:10:58.983532   61652 cache.go:87] Successfully saved all images to host disk.
	I0920 22:11:04.196429   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:04.196864   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:11:04.196894   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:11:04.196792   61331 retry.go:31] will retry after 4.148422946s: waiting for machine to come up
	I0920 22:11:06.512141   60373 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730 b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03 85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963 7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08 6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8 cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929 466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff 01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd eb6f6b5ff81d349a7b530160eae82c29853cc0a9626dcac0308900db9abe8b4b e2086d021c2f398c99830853e13e4aa3fe2aa056945e3cf93e2815171854ed44: (14.832432879s)
	W0920 22:11:06.512231   60373 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730 b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03 85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963 7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08 6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8 cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929 466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff 01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd eb6f6b5ff81d349a7b530160eae82c29853cc0a9626dcac0308900db9abe8b4b e2086d021c2f398c99830853e13e4aa3fe2aa056945e3cf93e2815171854ed44: Process exited with status 1
	stdout:
	8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730
	b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03
	85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963
	7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08
	6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8
	cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929
	466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff
	
	stderr:
	E0920 22:11:06.490715    3828 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd\": container with ID starting with 01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd not found: ID does not exist" containerID="01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd"
	time="2024-09-20T22:11:06Z" level=fatal msg="stopping the container \"01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd\": rpc error: code = NotFound desc = could not find container \"01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd\": container with ID starting with 01f9b917b1e588a93c86cac9e1f354e10e85563b9d77a3cfee355d83f77afbbd not found: ID does not exist"
	I0920 22:11:06.512321   60373 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:11:06.563660   60373 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:11:06.574847   60373 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 20 22:10 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Sep 20 22:10 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Sep 20 22:10 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Sep 20 22:10 /etc/kubernetes/scheduler.conf
	
	I0920 22:11:06.574926   60373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:11:06.585210   60373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:11:06.595545   60373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:11:06.605955   60373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:11:06.606017   60373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:11:06.617266   60373 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:11:06.627481   60373 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:11:06.627535   60373 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:11:06.637886   60373 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:11:06.648215   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:06.708488   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:09.933245   61652 start.go:364] duration metric: took 13.160338735s to acquireMachinesLock for "no-preload-032884"
	I0920 22:11:09.933323   61652 start.go:93] Provisioning new machine with config: &{Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:11:09.933440   61652 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 22:11:08.347985   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.348514   61195 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:11:08.348548   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.348557   61195 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:11:08.348932   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412
	I0920 22:11:08.427387   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:11:08.427439   61195 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:11:08.427453   61195 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:11:08.430504   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.430876   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.430912   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.431099   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:11:08.431138   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:11:08.431177   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:11:08.431202   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:11:08.431228   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:11:08.563455   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:11:08.563763   61195 main.go:141] libmachine: (old-k8s-version-593412) KVM machine creation complete!
	I0920 22:11:08.564069   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:11:08.564718   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:08.564912   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:08.565070   61195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 22:11:08.565087   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:11:08.566385   61195 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 22:11:08.566398   61195 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 22:11:08.566403   61195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 22:11:08.566408   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.568838   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.569214   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.569242   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.569365   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.569523   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.569659   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.569810   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.569937   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.570220   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.570239   61195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 22:11:08.690857   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:11:08.690885   61195 main.go:141] libmachine: Detecting the provisioner...
	I0920 22:11:08.690895   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.694148   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.694544   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.694569   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.694785   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.695034   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.695216   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.695373   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.695526   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.695702   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.695712   61195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 22:11:08.820566   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 22:11:08.820684   61195 main.go:141] libmachine: found compatible host: buildroot
	I0920 22:11:08.820699   61195 main.go:141] libmachine: Provisioning with buildroot...
	I0920 22:11:08.820714   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:08.820996   61195 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:11:08.821019   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:08.821200   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.824131   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.824550   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.824578   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.824777   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.825017   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.825223   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.825359   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.825548   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.825808   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.825827   61195 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:11:08.957512   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:11:08.957544   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.961239   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.961721   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.961751   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.961968   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.962163   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.962380   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.962532   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.962680   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.962841   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.962857   61195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:11:09.085061   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:11:09.085116   61195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:11:09.085142   61195 buildroot.go:174] setting up certificates
	I0920 22:11:09.085156   61195 provision.go:84] configureAuth start
	I0920 22:11:09.085173   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:09.085477   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.088394   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.088792   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.088829   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.088999   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.091337   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.091652   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.091692   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.091795   61195 provision.go:143] copyHostCerts
	I0920 22:11:09.091850   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:11:09.091872   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:11:09.091950   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:11:09.092127   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:11:09.092139   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:11:09.092172   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:11:09.092262   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:11:09.092272   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:11:09.092298   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:11:09.092375   61195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:11:09.241309   61195 provision.go:177] copyRemoteCerts
	I0920 22:11:09.241372   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:11:09.241399   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.244141   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.244454   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.244493   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.244643   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.244821   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.244994   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.245145   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.336260   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:11:09.362480   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:11:09.389045   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:11:09.413697   61195 provision.go:87] duration metric: took 328.525516ms to configureAuth
	I0920 22:11:09.413745   61195 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:11:09.413938   61195 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:11:09.414015   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.416615   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.416962   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.417007   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.417150   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.417305   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.417482   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.417658   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.417835   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.418071   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:09.418092   61195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:11:09.669740   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:11:09.669825   61195 main.go:141] libmachine: Checking connection to Docker...
	I0920 22:11:09.669840   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetURL
	I0920 22:11:09.671417   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using libvirt version 6000000
	I0920 22:11:09.674306   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.674626   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.674655   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.674855   61195 main.go:141] libmachine: Docker is up and running!
	I0920 22:11:09.674872   61195 main.go:141] libmachine: Reticulating splines...
	I0920 22:11:09.674880   61195 client.go:171] duration metric: took 24.009268702s to LocalClient.Create
	I0920 22:11:09.674906   61195 start.go:167] duration metric: took 24.009358408s to libmachine.API.Create "old-k8s-version-593412"
	I0920 22:11:09.674917   61195 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:11:09.674929   61195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:11:09.674951   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.675202   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:11:09.675244   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.677490   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.677808   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.677836   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.677987   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.678183   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.678361   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.678517   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.765383   61195 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:11:09.769766   61195 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:11:09.769789   61195 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:11:09.769851   61195 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:11:09.769947   61195 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:11:09.770066   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:11:09.780182   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:11:09.806621   61195 start.go:296] duration metric: took 131.691184ms for postStartSetup
	I0920 22:11:09.806665   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:11:09.807293   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.810021   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.810446   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.810477   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.810693   61195 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:11:09.810905   61195 start.go:128] duration metric: took 24.170425111s to createHost
	I0920 22:11:09.810926   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.813525   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.813841   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.813877   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.814089   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.814262   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.814426   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.814522   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.814657   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.814884   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:09.814896   61195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:11:09.933043   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870269.901927309
	
	I0920 22:11:09.933071   61195 fix.go:216] guest clock: 1726870269.901927309
	I0920 22:11:09.933081   61195 fix.go:229] Guest: 2024-09-20 22:11:09.901927309 +0000 UTC Remote: 2024-09-20 22:11:09.810916219 +0000 UTC m=+38.285344566 (delta=91.01109ms)
	I0920 22:11:09.933137   61195 fix.go:200] guest clock delta is within tolerance: 91.01109ms
	I0920 22:11:09.933149   61195 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 24.292835276s
	I0920 22:11:09.933212   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.933524   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.936810   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.937169   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.937198   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.937400   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.937885   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.938087   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.938196   61195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:11:09.938239   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.938335   61195 ssh_runner.go:195] Run: cat /version.json
	I0920 22:11:09.938360   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.941530   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.941718   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.941960   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.941997   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.942264   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.942422   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.942430   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.942483   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.942593   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.942662   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.942753   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.942914   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.942915   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.943049   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:10.028373   61195 ssh_runner.go:195] Run: systemctl --version
	I0920 22:11:10.057467   61195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:11:10.236787   61195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:11:10.246471   61195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:11:10.246548   61195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:11:10.265996   61195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:11:10.266028   61195 start.go:495] detecting cgroup driver to use...
	I0920 22:11:10.266106   61195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:11:10.289030   61195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:11:10.307350   61195 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:11:10.307436   61195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:11:10.324682   61195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:11:10.343455   61195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:11:10.484982   61195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:11:10.652002   61195 docker.go:233] disabling docker service ...
	I0920 22:11:10.652072   61195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:11:10.668019   61195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:11:10.683298   61195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:11:10.840832   61195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:11:10.997730   61195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:11:11.013827   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:11:11.037710   61195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:11:11.037779   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.050522   61195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:11:11.050607   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.062222   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.072953   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.084283   61195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:11:11.096046   61195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:11:11.109800   61195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:11:11.109875   61195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:11:11.131367   61195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:11:11.145546   61195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:11.268716   61195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:11:11.393820   61195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:11:11.393919   61195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:11:11.399333   61195 start.go:563] Will wait 60s for crictl version
	I0920 22:11:11.399402   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:11.404727   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:11:11.461428   61195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:11:11.461515   61195 ssh_runner.go:195] Run: crio --version
	I0920 22:11:11.492371   61195 ssh_runner.go:195] Run: crio --version
	I0920 22:11:11.526738   61195 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:11:11.528094   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:11.531895   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:11.533623   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:11.533648   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:11.533878   61195 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:11:11.538932   61195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:11:11.553400   61195 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:11:11.553520   61195 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:11:11.553570   61195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:11:09.935247   61652 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 22:11:09.935518   61652 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:09.935557   61652 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:09.955314   61652 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44589
	I0920 22:11:09.955937   61652 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:09.956571   61652 main.go:141] libmachine: Using API Version  1
	I0920 22:11:09.956596   61652 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:09.957066   61652 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:09.957322   61652 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:11:09.957464   61652 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:11:09.957629   61652 start.go:159] libmachine.API.Create for "no-preload-032884" (driver="kvm2")
	I0920 22:11:09.957650   61652 client.go:168] LocalClient.Create starting
	I0920 22:11:09.957687   61652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 22:11:09.957733   61652 main.go:141] libmachine: Decoding PEM data...
	I0920 22:11:09.957755   61652 main.go:141] libmachine: Parsing certificate...
	I0920 22:11:09.957820   61652 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 22:11:09.957844   61652 main.go:141] libmachine: Decoding PEM data...
	I0920 22:11:09.957860   61652 main.go:141] libmachine: Parsing certificate...
	I0920 22:11:09.957884   61652 main.go:141] libmachine: Running pre-create checks...
	I0920 22:11:09.957896   61652 main.go:141] libmachine: (no-preload-032884) Calling .PreCreateCheck
	I0920 22:11:09.958275   61652 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:11:09.958751   61652 main.go:141] libmachine: Creating machine...
	I0920 22:11:09.958770   61652 main.go:141] libmachine: (no-preload-032884) Calling .Create
	I0920 22:11:09.958913   61652 main.go:141] libmachine: (no-preload-032884) Creating KVM machine...
	I0920 22:11:09.960260   61652 main.go:141] libmachine: (no-preload-032884) DBG | found existing default KVM network
	I0920 22:11:09.961614   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:09.961452   61773 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:f2:5e} reservation:<nil>}
	I0920 22:11:09.962484   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:09.962392   61773 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:f3:10} reservation:<nil>}
	I0920 22:11:09.963675   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:09.963575   61773 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ac:94:a1} reservation:<nil>}
	I0920 22:11:09.965103   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:09.965026   61773 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000319930}
	I0920 22:11:09.965160   61652 main.go:141] libmachine: (no-preload-032884) DBG | created network xml: 
	I0920 22:11:09.965179   61652 main.go:141] libmachine: (no-preload-032884) DBG | <network>
	I0920 22:11:09.965193   61652 main.go:141] libmachine: (no-preload-032884) DBG |   <name>mk-no-preload-032884</name>
	I0920 22:11:09.965207   61652 main.go:141] libmachine: (no-preload-032884) DBG |   <dns enable='no'/>
	I0920 22:11:09.965219   61652 main.go:141] libmachine: (no-preload-032884) DBG |   
	I0920 22:11:09.965229   61652 main.go:141] libmachine: (no-preload-032884) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0920 22:11:09.965237   61652 main.go:141] libmachine: (no-preload-032884) DBG |     <dhcp>
	I0920 22:11:09.965242   61652 main.go:141] libmachine: (no-preload-032884) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0920 22:11:09.965247   61652 main.go:141] libmachine: (no-preload-032884) DBG |     </dhcp>
	I0920 22:11:09.965256   61652 main.go:141] libmachine: (no-preload-032884) DBG |   </ip>
	I0920 22:11:09.965260   61652 main.go:141] libmachine: (no-preload-032884) DBG |   
	I0920 22:11:09.965274   61652 main.go:141] libmachine: (no-preload-032884) DBG | </network>
	I0920 22:11:09.965288   61652 main.go:141] libmachine: (no-preload-032884) DBG | 
	I0920 22:11:09.971122   61652 main.go:141] libmachine: (no-preload-032884) DBG | trying to create private KVM network mk-no-preload-032884 192.168.72.0/24...
	I0920 22:11:10.055789   61652 main.go:141] libmachine: (no-preload-032884) DBG | private KVM network mk-no-preload-032884 192.168.72.0/24 created
	I0920 22:11:10.055828   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:10.055755   61773 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:11:10.055848   61652 main.go:141] libmachine: (no-preload-032884) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884 ...
	I0920 22:11:10.055859   61652 main.go:141] libmachine: (no-preload-032884) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 22:11:10.055892   61652 main.go:141] libmachine: (no-preload-032884) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 22:11:10.325361   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:10.325253   61773 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa...
	I0920 22:11:10.619429   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:10.619288   61773 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/no-preload-032884.rawdisk...
	I0920 22:11:10.619463   61652 main.go:141] libmachine: (no-preload-032884) DBG | Writing magic tar header
	I0920 22:11:10.619488   61652 main.go:141] libmachine: (no-preload-032884) DBG | Writing SSH key tar header
	I0920 22:11:10.619502   61652 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:11:10.619448   61773 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884 ...
	I0920 22:11:10.619615   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884
	I0920 22:11:10.619642   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 22:11:10.619656   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884 (perms=drwx------)
	I0920 22:11:10.619668   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 22:11:10.619679   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 22:11:10.619689   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:11:10.619703   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 22:11:10.619712   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 22:11:10.619722   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 22:11:10.619731   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home/jenkins
	I0920 22:11:10.619742   61652 main.go:141] libmachine: (no-preload-032884) DBG | Checking permissions on dir: /home
	I0920 22:11:10.619749   61652 main.go:141] libmachine: (no-preload-032884) DBG | Skipping /home - not owner
	I0920 22:11:10.619762   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 22:11:10.619770   61652 main.go:141] libmachine: (no-preload-032884) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
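	Before defining the domain, the driver walks the parent directories of the machine path and sets the search (executable) bit on every directory it owns, skipping directories it does not own such as /home. A rough sketch of that walk, assuming ownership is detected simply by whether chmod succeeds (the real check is more careful):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// fixPermissions walks from dir up to the filesystem root and, for every
// directory the current user owns, makes sure the owner's executable
// (search) bit is set so the machine directory stays reachable.
func fixPermissions(dir string) error {
	for p := dir; p != "/" && p != "."; p = filepath.Dir(p) {
		info, err := os.Stat(p)
		if err != nil {
			return err
		}
		mode := info.Mode().Perm()
		if mode&0o100 != 0 {
			continue // already traversable by the owner
		}
		if err := os.Chmod(p, mode|0o100); err != nil {
			// Not the owner (e.g. /home): skip instead of failing.
			fmt.Println("Skipping", p, "- not owner")
			continue
		}
		fmt.Printf("Setting executable bit set on %s (perms=%v)\n", p, mode|0o100)
	}
	return nil
}

func main() {
	machineDir := "/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884"
	if err := fixPermissions(machineDir); err != nil {
		fmt.Println("error:", err)
	}
}
```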
	I0920 22:11:10.619781   61652 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:11:10.621299   61652 main.go:141] libmachine: (no-preload-032884) define libvirt domain using xml: 
	I0920 22:11:10.621325   61652 main.go:141] libmachine: (no-preload-032884) <domain type='kvm'>
	I0920 22:11:10.621336   61652 main.go:141] libmachine: (no-preload-032884)   <name>no-preload-032884</name>
	I0920 22:11:10.621348   61652 main.go:141] libmachine: (no-preload-032884)   <memory unit='MiB'>2200</memory>
	I0920 22:11:10.621357   61652 main.go:141] libmachine: (no-preload-032884)   <vcpu>2</vcpu>
	I0920 22:11:10.621368   61652 main.go:141] libmachine: (no-preload-032884)   <features>
	I0920 22:11:10.621381   61652 main.go:141] libmachine: (no-preload-032884)     <acpi/>
	I0920 22:11:10.621391   61652 main.go:141] libmachine: (no-preload-032884)     <apic/>
	I0920 22:11:10.621401   61652 main.go:141] libmachine: (no-preload-032884)     <pae/>
	I0920 22:11:10.621405   61652 main.go:141] libmachine: (no-preload-032884)     
	I0920 22:11:10.621413   61652 main.go:141] libmachine: (no-preload-032884)   </features>
	I0920 22:11:10.621417   61652 main.go:141] libmachine: (no-preload-032884)   <cpu mode='host-passthrough'>
	I0920 22:11:10.621427   61652 main.go:141] libmachine: (no-preload-032884)   
	I0920 22:11:10.621436   61652 main.go:141] libmachine: (no-preload-032884)   </cpu>
	I0920 22:11:10.621447   61652 main.go:141] libmachine: (no-preload-032884)   <os>
	I0920 22:11:10.621460   61652 main.go:141] libmachine: (no-preload-032884)     <type>hvm</type>
	I0920 22:11:10.621472   61652 main.go:141] libmachine: (no-preload-032884)     <boot dev='cdrom'/>
	I0920 22:11:10.621484   61652 main.go:141] libmachine: (no-preload-032884)     <boot dev='hd'/>
	I0920 22:11:10.621496   61652 main.go:141] libmachine: (no-preload-032884)     <bootmenu enable='no'/>
	I0920 22:11:10.621504   61652 main.go:141] libmachine: (no-preload-032884)   </os>
	I0920 22:11:10.621514   61652 main.go:141] libmachine: (no-preload-032884)   <devices>
	I0920 22:11:10.621521   61652 main.go:141] libmachine: (no-preload-032884)     <disk type='file' device='cdrom'>
	I0920 22:11:10.621537   61652 main.go:141] libmachine: (no-preload-032884)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/boot2docker.iso'/>
	I0920 22:11:10.621559   61652 main.go:141] libmachine: (no-preload-032884)       <target dev='hdc' bus='scsi'/>
	I0920 22:11:10.621570   61652 main.go:141] libmachine: (no-preload-032884)       <readonly/>
	I0920 22:11:10.621577   61652 main.go:141] libmachine: (no-preload-032884)     </disk>
	I0920 22:11:10.621589   61652 main.go:141] libmachine: (no-preload-032884)     <disk type='file' device='disk'>
	I0920 22:11:10.621601   61652 main.go:141] libmachine: (no-preload-032884)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 22:11:10.621617   61652 main.go:141] libmachine: (no-preload-032884)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/no-preload-032884.rawdisk'/>
	I0920 22:11:10.621629   61652 main.go:141] libmachine: (no-preload-032884)       <target dev='hda' bus='virtio'/>
	I0920 22:11:10.621641   61652 main.go:141] libmachine: (no-preload-032884)     </disk>
	I0920 22:11:10.621651   61652 main.go:141] libmachine: (no-preload-032884)     <interface type='network'>
	I0920 22:11:10.621660   61652 main.go:141] libmachine: (no-preload-032884)       <source network='mk-no-preload-032884'/>
	I0920 22:11:10.621667   61652 main.go:141] libmachine: (no-preload-032884)       <model type='virtio'/>
	I0920 22:11:10.621675   61652 main.go:141] libmachine: (no-preload-032884)     </interface>
	I0920 22:11:10.621682   61652 main.go:141] libmachine: (no-preload-032884)     <interface type='network'>
	I0920 22:11:10.621690   61652 main.go:141] libmachine: (no-preload-032884)       <source network='default'/>
	I0920 22:11:10.621701   61652 main.go:141] libmachine: (no-preload-032884)       <model type='virtio'/>
	I0920 22:11:10.621709   61652 main.go:141] libmachine: (no-preload-032884)     </interface>
	I0920 22:11:10.621719   61652 main.go:141] libmachine: (no-preload-032884)     <serial type='pty'>
	I0920 22:11:10.621729   61652 main.go:141] libmachine: (no-preload-032884)       <target port='0'/>
	I0920 22:11:10.621738   61652 main.go:141] libmachine: (no-preload-032884)     </serial>
	I0920 22:11:10.621746   61652 main.go:141] libmachine: (no-preload-032884)     <console type='pty'>
	I0920 22:11:10.621756   61652 main.go:141] libmachine: (no-preload-032884)       <target type='serial' port='0'/>
	I0920 22:11:10.621764   61652 main.go:141] libmachine: (no-preload-032884)     </console>
	I0920 22:11:10.621777   61652 main.go:141] libmachine: (no-preload-032884)     <rng model='virtio'>
	I0920 22:11:10.621790   61652 main.go:141] libmachine: (no-preload-032884)       <backend model='random'>/dev/random</backend>
	I0920 22:11:10.621799   61652 main.go:141] libmachine: (no-preload-032884)     </rng>
	I0920 22:11:10.621807   61652 main.go:141] libmachine: (no-preload-032884)     
	I0920 22:11:10.621815   61652 main.go:141] libmachine: (no-preload-032884)     
	I0920 22:11:10.621827   61652 main.go:141] libmachine: (no-preload-032884)   </devices>
	I0920 22:11:10.621836   61652 main.go:141] libmachine: (no-preload-032884) </domain>
	I0920 22:11:10.621861   61652 main.go:141] libmachine: (no-preload-032884) 
	I0920 22:11:10.627365   61652 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:0e:5d:b7 in network default
	I0920 22:11:10.628018   61652 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:11:10.628038   61652 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:11:10.628874   61652 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:11:10.629194   61652 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:11:10.629733   61652 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:11:10.630461   61652 main.go:141] libmachine: (no-preload-032884) Creating domain...
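	The domain XML printed above is produced by filling a fixed template with the per-machine values (name, memory, vCPUs, ISO and raw-disk paths, network name) and handing the result to libvirt to define and start. A minimal sketch of just the templating step; the struct, template body and paths are illustrative, not the driver's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// domainParams holds the handful of values that vary between machines.
type domainParams struct {
	Name     string
	MemoryMB int
	VCPUs    int
	ISOPath  string
	DiskPath string
	Network  string
}

const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.VCPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// The rendered XML would then be passed to libvirt's define-domain call.
	_ = t.Execute(os.Stdout, domainParams{
		Name:     "no-preload-032884",
		MemoryMB: 2200,
		VCPUs:    2,
		ISOPath:  "/path/to/boot2docker.iso",
		DiskPath: "/path/to/no-preload-032884.rawdisk",
		Network:  "mk-no-preload-032884",
	})
}
```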
	I0920 22:11:07.550795   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:07.785345   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:07.853932   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:07.932598   60373 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:11:07.932696   60373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:11:08.432872   60373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:11:08.933053   60373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:11:08.952628   60373 api_server.go:72] duration metric: took 1.020028765s to wait for apiserver process to appear ...
	I0920 22:11:08.952655   60373 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:11:08.952673   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:11.288474   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:11:11.288506   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:11:11.288534   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:11.334911   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:11:11.334940   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:11:11.453313   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:11.462397   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:11:11.462433   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:11:11.953060   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:11.960951   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:11:11.960999   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:11:12.453440   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:12.471684   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:11:12.471714   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:11:12.952803   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:12.960861   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:11:12.960894   60373 api_server.go:103] status: https://192.168.50.192:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:11:13.453585   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:13.461210   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 200:
	ok
	I0920 22:11:13.473371   60373 api_server.go:141] control plane version: v1.31.1
	I0920 22:11:13.473413   60373 api_server.go:131] duration metric: took 4.520750442s to wait for apiserver health ...
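	The healthz wait above is a plain polling loop: GET https://<node>:8443/healthz, treat 403 (anonymous access while RBAC bootstraps) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) as "not ready yet", and stop once the endpoint returns 200 ok. A self-contained sketch of such a loop; TLS verification is skipped here only to keep the example short, whereas the real client trusts the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // "ok"
			}
			// 403 and 500 both mean "not ready yet": report and retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.192:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```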
	I0920 22:11:13.473425   60373 cni.go:84] Creating CNI manager for ""
	I0920 22:11:13.473433   60373 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:11:13.477677   60373 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:11:13.479290   60373 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:11:13.491893   60373 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
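	With the apiserver healthy, minikube writes a bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist on the node. The actual 496-byte conflist is not reproduced in the log, so the following is only a representative bridge + host-local configuration, written out by a small Go helper for illustration:

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist; the field values are illustrative
// assumptions, not the exact file minikube copies to the node.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// On the node this lands under /etc/cni/net.d/; writing to the current
	// directory here keeps the sketch runnable without root.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed:", err)
	}
}
```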
	I0920 22:11:13.514106   60373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:11:13.514218   60373 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0920 22:11:13.514241   60373 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0920 22:11:13.531943   60373 system_pods.go:59] 8 kube-system pods found
	I0920 22:11:13.531975   60373 system_pods.go:61] "coredns-7c65d6cfc9-6qst7" [562afc6d-61e7-4c7b-9d09-d46cddf8bd6b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:11:13.531982   60373 system_pods.go:61] "coredns-7c65d6cfc9-77zwp" [c83a3341-7159-4f94-b624-79450d4689ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:11:13.532019   60373 system_pods.go:61] "etcd-kubernetes-upgrade-696728" [cc5b0a87-a48d-4072-b5f7-dc6436f37724] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:11:13.532038   60373 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-696728" [7f6cc8e7-38ce-4e37-964c-a81a57fd6e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:11:13.532056   60373 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-696728" [314a29ef-3748-42d4-8091-e3c29c78b257] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:11:13.532063   60373 system_pods.go:61] "kube-proxy-vkhgq" [9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b] Running
	I0920 22:11:13.532068   60373 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-696728" [4cfc3267-921b-44d8-8647-0dd022d69d7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:11:13.532072   60373 system_pods.go:61] "storage-provisioner" [815b34aa-872f-46eb-9c27-64452e0d49f9] Running
	I0920 22:11:13.532078   60373 system_pods.go:74] duration metric: took 17.947928ms to wait for pod list to return data ...
	I0920 22:11:13.532119   60373 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:11:13.536861   60373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:11:13.536898   60373 node_conditions.go:123] node cpu capacity is 2
	I0920 22:11:13.536912   60373 node_conditions.go:105] duration metric: took 4.785069ms to run NodePressure ...
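	The "waiting for kube-system pods to appear" and NodePressure checks are straightforward list calls against the API server. A minimal client-go sketch of the pod listing; the kubeconfig path is an assumption, and minikube drives this through its own kapi client helpers rather than a standalone program:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the runs in this report use the Jenkins workspace kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q phase=%s\n", p.Name, p.Status.Phase)
	}
}
```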
	I0920 22:11:13.536928   60373 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:11:13.885746   60373 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:11:13.904258   60373 ops.go:34] apiserver oom_adj: -16
	I0920 22:11:13.904280   60373 kubeadm.go:597] duration metric: took 22.291987801s to restartPrimaryControlPlane
	I0920 22:11:13.904296   60373 kubeadm.go:394] duration metric: took 22.407131019s to StartCluster
	I0920 22:11:13.904315   60373 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:13.904396   60373 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:11:13.905655   60373 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:13.905930   60373 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.192 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:11:13.906148   60373 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:11:13.906256   60373 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-696728"
	I0920 22:11:13.906283   60373 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-696728"
	W0920 22:11:13.906292   60373 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:11:13.906298   60373 config.go:182] Loaded profile config "kubernetes-upgrade-696728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:11:13.906333   60373 host.go:66] Checking if "kubernetes-upgrade-696728" exists ...
	I0920 22:11:13.906353   60373 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-696728"
	I0920 22:11:13.906391   60373 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-696728"
	I0920 22:11:13.906816   60373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:13.906846   60373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:13.906870   60373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:13.906901   60373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:13.908669   60373 out.go:177] * Verifying Kubernetes components...
	I0920 22:11:13.910057   60373 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:13.927546   60373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45071
	I0920 22:11:13.928149   60373 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:13.928273   60373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33213
	I0920 22:11:13.928724   60373 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:13.928899   60373 main.go:141] libmachine: Using API Version  1
	I0920 22:11:13.928919   60373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:13.929315   60373 main.go:141] libmachine: Using API Version  1
	I0920 22:11:13.929340   60373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:13.929408   60373 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:13.929591   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetState
	I0920 22:11:13.929681   60373 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:13.930448   60373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:13.930485   60373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:13.932878   60373 kapi.go:59] client config for kubernetes-upgrade-696728: &rest.Config{Host:"https://192.168.50.192:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt", KeyFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key", CAFile:"/home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f67ea0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0920 22:11:13.933344   60373 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-696728"
	W0920 22:11:13.933370   60373 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:11:13.933401   60373 host.go:66] Checking if "kubernetes-upgrade-696728" exists ...
	I0920 22:11:13.933811   60373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:13.933850   60373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:13.955557   60373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37151
	I0920 22:11:13.956202   60373 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:13.956831   60373 main.go:141] libmachine: Using API Version  1
	I0920 22:11:13.956850   60373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:13.957258   60373 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:13.957846   60373 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:11:13.957895   60373 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:11:13.958166   60373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46035
	I0920 22:11:13.958519   60373 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:13.959150   60373 main.go:141] libmachine: Using API Version  1
	I0920 22:11:13.959174   60373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:13.959564   60373 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:13.959771   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetState
	I0920 22:11:13.961714   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:11:13.963852   60373 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:11:13.965085   60373 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:11:13.965105   60373 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:11:13.965124   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:11:13.968553   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:11:13.969006   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:10:02 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:11:13.969024   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:11:13.969196   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:11:13.969386   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:11:13.969512   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:11:13.969660   60373 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:11:13.978728   60373 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37361
	I0920 22:11:13.979343   60373 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:11:13.980092   60373 main.go:141] libmachine: Using API Version  1
	I0920 22:11:13.980111   60373 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:11:13.980633   60373 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:11:13.980856   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetState
	I0920 22:11:13.982908   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .DriverName
	I0920 22:11:13.983160   60373 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:11:13.983177   60373 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:11:13.983195   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHHostname
	I0920 22:11:13.986933   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:11:13.997323   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c0:af:5b", ip: ""} in network mk-kubernetes-upgrade-696728: {Iface:virbr2 ExpiryTime:2024-09-20 23:10:02 +0000 UTC Type:0 Mac:52:54:00:c0:af:5b Iaid: IPaddr:192.168.50.192 Prefix:24 Hostname:kubernetes-upgrade-696728 Clientid:01:52:54:00:c0:af:5b}
	I0920 22:11:13.997357   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | domain kubernetes-upgrade-696728 has defined IP address 192.168.50.192 and MAC address 52:54:00:c0:af:5b in network mk-kubernetes-upgrade-696728
	I0920 22:11:13.997379   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHPort
	I0920 22:11:13.999237   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHKeyPath
	I0920 22:11:13.999454   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .GetSSHUsername
	I0920 22:11:13.999615   60373 sshutil.go:53] new ssh client: &{IP:192.168.50.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa Username:docker}
	I0920 22:11:14.167071   60373 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:11:14.187787   60373 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:11:14.187847   60373 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:11:14.210180   60373 api_server.go:72] duration metric: took 304.215919ms to wait for apiserver process to appear ...
	I0920 22:11:14.210217   60373 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:11:14.210243   60373 api_server.go:253] Checking apiserver healthz at https://192.168.50.192:8443/healthz ...
	I0920 22:11:14.216786   60373 api_server.go:279] https://192.168.50.192:8443/healthz returned 200:
	ok
	I0920 22:11:14.217886   60373 api_server.go:141] control plane version: v1.31.1
	I0920 22:11:14.217907   60373 api_server.go:131] duration metric: took 7.682767ms to wait for apiserver health ...
	I0920 22:11:14.217915   60373 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:11:14.225377   60373 system_pods.go:59] 8 kube-system pods found
	I0920 22:11:14.225414   60373 system_pods.go:61] "coredns-7c65d6cfc9-6qst7" [562afc6d-61e7-4c7b-9d09-d46cddf8bd6b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:11:14.225425   60373 system_pods.go:61] "coredns-7c65d6cfc9-77zwp" [c83a3341-7159-4f94-b624-79450d4689ca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:11:14.225438   60373 system_pods.go:61] "etcd-kubernetes-upgrade-696728" [cc5b0a87-a48d-4072-b5f7-dc6436f37724] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:11:14.225450   60373 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-696728" [7f6cc8e7-38ce-4e37-964c-a81a57fd6e56] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:11:14.225461   60373 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-696728" [314a29ef-3748-42d4-8091-e3c29c78b257] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:11:14.225471   60373 system_pods.go:61] "kube-proxy-vkhgq" [9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b] Running
	I0920 22:11:14.225479   60373 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-696728" [4cfc3267-921b-44d8-8647-0dd022d69d7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:11:14.225484   60373 system_pods.go:61] "storage-provisioner" [815b34aa-872f-46eb-9c27-64452e0d49f9] Running
	I0920 22:11:14.225495   60373 system_pods.go:74] duration metric: took 7.574188ms to wait for pod list to return data ...
	I0920 22:11:14.225525   60373 kubeadm.go:582] duration metric: took 319.552203ms to wait for: map[apiserver:true system_pods:true]
	I0920 22:11:14.225543   60373 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:11:14.229096   60373 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:11:14.229120   60373 node_conditions.go:123] node cpu capacity is 2
	I0920 22:11:14.229128   60373 node_conditions.go:105] duration metric: took 3.580212ms to run NodePressure ...
	I0920 22:11:14.229138   60373 start.go:241] waiting for startup goroutines ...
	I0920 22:11:14.341447   60373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:11:14.350697   60373 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
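	The addon manifests are applied by running kubectl on the node itself over SSH, as in the two ssh_runner commands above. A short sketch of executing such a remote command with golang.org/x/crypto/ssh; the address, key path and command are taken from the log, but the helper itself is illustrative rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes a single command on the node over SSH, the same way
// the addon manifests are applied with the node-local kubectl.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("192.168.50.192:22", "docker",
		"/home/jenkins/minikube-integration/19672-9614/.minikube/machines/kubernetes-upgrade-696728/id_rsa",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
	fmt.Println(out, err)
}
```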
	I0920 22:11:14.574519   60373 main.go:141] libmachine: Making call to close driver server
	I0920 22:11:14.574562   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Close
	I0920 22:11:14.575009   60373 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:11:14.575074   60373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:11:14.575102   60373 main.go:141] libmachine: Making call to close driver server
	I0920 22:11:14.575120   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Close
	I0920 22:11:14.575025   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Closing plugin on server side
	I0920 22:11:14.575511   60373 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:11:14.575551   60373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:11:14.575552   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Closing plugin on server side
	I0920 22:11:14.595499   60373 main.go:141] libmachine: Making call to close driver server
	I0920 22:11:14.595545   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Close
	I0920 22:11:14.597693   60373 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:11:14.597719   60373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:11:14.597697   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Closing plugin on server side
	I0920 22:11:15.340621   60373 main.go:141] libmachine: Making call to close driver server
	I0920 22:11:15.340654   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Close
	I0920 22:11:15.341091   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Closing plugin on server side
	I0920 22:11:15.341132   60373 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:11:15.341155   60373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:11:15.341173   60373 main.go:141] libmachine: Making call to close driver server
	I0920 22:11:15.341187   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) Calling .Close
	I0920 22:11:15.341474   60373 main.go:141] libmachine: (kubernetes-upgrade-696728) DBG | Closing plugin on server side
	I0920 22:11:15.341511   60373 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:11:15.341519   60373 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:11:15.343508   60373 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0920 22:11:15.344745   60373 addons.go:510] duration metric: took 1.438598746s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0920 22:11:15.344792   60373 start.go:246] waiting for cluster config update ...
	I0920 22:11:15.344806   60373 start.go:255] writing updated cluster config ...
	I0920 22:11:15.345122   60373 ssh_runner.go:195] Run: rm -f paused
	I0920 22:11:15.411283   60373 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:11:15.413458   60373 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-696728" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.285243545Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870276284812704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6dc8856-75be-456e-92bb-b058d3498af9 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.287656096Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5a2675a0-bb48-4455-9edb-a86ed3b3c0ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.287738732Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5a2675a0-bb48-4455-9edb-a86ed3b3c0ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.288126138Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:64c83541283fd23d252e798602e62a310d6f29db5384200873f4f104c28d8338,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272237751675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96918cb88645b3a8412c0264388fafbeafcb627023d6b2d58177a649a4f12b1,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272282912863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a9dc7bcbe93b478af66cf86fb1d78f2bb4d3a0d2edac3a195875c39829b15c,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726870272294046084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2393388229dba5d9cdd1ae7bddb4efd8ca83c6f5c1fe117e79b8ea77f00ac027,PodSandboxId:3bd607b946b731b7347003c36efd2d3e70b110d8c9669b3f670b92969810a741,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Created
At:1726870268425663427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af81e0de77b145f5ea62640f88f8f43b6c0213061e813054254d0c60e8d7fe99,PodSandboxId:5ff61e5bb5591b5a43479febb5c5ef973c84e6dfb9b0c2138c3dc812393363f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870268456723171,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44c918fbf3e851f4171bbf499a60a4705a06a5fb774a75e8c60676727f40ea8,PodSandboxId:ae1bf2c9613f4c013ffb62a95a59ef0d7f323efa1b92b7ab41c3289acc0d5315,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172687026840425303
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ecd6ed8c863ad084d9b83f9d418f90ef130f21a341361dcccc062a4f1165174,PodSandboxId:c5293d5b5c4ac0eaba2726293ff65fa1aab95f319e3e7b83710827f5f03e6927,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:17268
70268395372961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:172687
0266218357274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68a260c5ac88ec4be65f2c97ca10f74764283af39c29b131ca92007bcf9cef2,PodSandboxId:57e945265a417a39e97b25a7be7ba203b8f2eaf32eff81142d3d76ba28bd8fe5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870264223497719,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870251082314289,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870250914362559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff,PodSandboxId:09803dbed6e65a791d5ddfe7f0545f145778d71117aa1070b518116471c
41e33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726870247218068461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963,PodSandboxId:6948040c4d5baa1e1107489d4735266730655e226622373b1763ff97d12d557f,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726870247521145040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8,PodSandboxId:86aaf4b3ebd36afde6bf41b63dd1d1b5d4872fa36b60d7b63a5f7b187df42415,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726870247313190146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08,PodSandboxId:8551bc4a85d68e5d20c33220c66194bc025d38bb764886df0d22038043017904,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870247319596415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929,PodSandboxId:9b828584ace547f22eb531d2f4bb683d0fe154f7fe39679c6d5efa6daba2bedf,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726870247290471919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5a2675a0-bb48-4455-9edb-a86ed3b3c0ac name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.348650830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9c2ce3bf-977c-4b55-91e4-0679ea6bd923 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.348754314Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9c2ce3bf-977c-4b55-91e4-0679ea6bd923 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.350208447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45eab06a-4966-46e5-94d4-1b36521d105c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.350695555Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870276350657560,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45eab06a-4966-46e5-94d4-1b36521d105c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.351259907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c40b66b-70c3-415f-9917-a221793146f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.351330289Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c40b66b-70c3-415f-9917-a221793146f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.351669857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:64c83541283fd23d252e798602e62a310d6f29db5384200873f4f104c28d8338,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272237751675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96918cb88645b3a8412c0264388fafbeafcb627023d6b2d58177a649a4f12b1,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272282912863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a9dc7bcbe93b478af66cf86fb1d78f2bb4d3a0d2edac3a195875c39829b15c,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726870272294046084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2393388229dba5d9cdd1ae7bddb4efd8ca83c6f5c1fe117e79b8ea77f00ac027,PodSandboxId:3bd607b946b731b7347003c36efd2d3e70b110d8c9669b3f670b92969810a741,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Created
At:1726870268425663427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af81e0de77b145f5ea62640f88f8f43b6c0213061e813054254d0c60e8d7fe99,PodSandboxId:5ff61e5bb5591b5a43479febb5c5ef973c84e6dfb9b0c2138c3dc812393363f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870268456723171,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44c918fbf3e851f4171bbf499a60a4705a06a5fb774a75e8c60676727f40ea8,PodSandboxId:ae1bf2c9613f4c013ffb62a95a59ef0d7f323efa1b92b7ab41c3289acc0d5315,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172687026840425303
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ecd6ed8c863ad084d9b83f9d418f90ef130f21a341361dcccc062a4f1165174,PodSandboxId:c5293d5b5c4ac0eaba2726293ff65fa1aab95f319e3e7b83710827f5f03e6927,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:17268
70268395372961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:172687
0266218357274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68a260c5ac88ec4be65f2c97ca10f74764283af39c29b131ca92007bcf9cef2,PodSandboxId:57e945265a417a39e97b25a7be7ba203b8f2eaf32eff81142d3d76ba28bd8fe5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870264223497719,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870251082314289,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870250914362559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff,PodSandboxId:09803dbed6e65a791d5ddfe7f0545f145778d71117aa1070b518116471c
41e33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726870247218068461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963,PodSandboxId:6948040c4d5baa1e1107489d4735266730655e226622373b1763ff97d12d557f,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726870247521145040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8,PodSandboxId:86aaf4b3ebd36afde6bf41b63dd1d1b5d4872fa36b60d7b63a5f7b187df42415,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726870247313190146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08,PodSandboxId:8551bc4a85d68e5d20c33220c66194bc025d38bb764886df0d22038043017904,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870247319596415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929,PodSandboxId:9b828584ace547f22eb531d2f4bb683d0fe154f7fe39679c6d5efa6daba2bedf,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726870247290471919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c40b66b-70c3-415f-9917-a221793146f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.402171870Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7d91c70e-4861-4526-a962-15761128a7c3 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.402299719Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7d91c70e-4861-4526-a962-15761128a7c3 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.403788998Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=60c81d80-2b2f-4643-913c-5feaf15252c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.404496432Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870276404461451,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=60c81d80-2b2f-4643-913c-5feaf15252c8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.405549872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a44353c-74f9-49e6-94d3-48a5039af4c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.405650294Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a44353c-74f9-49e6-94d3-48a5039af4c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.406257696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:64c83541283fd23d252e798602e62a310d6f29db5384200873f4f104c28d8338,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272237751675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96918cb88645b3a8412c0264388fafbeafcb627023d6b2d58177a649a4f12b1,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272282912863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a9dc7bcbe93b478af66cf86fb1d78f2bb4d3a0d2edac3a195875c39829b15c,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726870272294046084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2393388229dba5d9cdd1ae7bddb4efd8ca83c6f5c1fe117e79b8ea77f00ac027,PodSandboxId:3bd607b946b731b7347003c36efd2d3e70b110d8c9669b3f670b92969810a741,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Created
At:1726870268425663427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af81e0de77b145f5ea62640f88f8f43b6c0213061e813054254d0c60e8d7fe99,PodSandboxId:5ff61e5bb5591b5a43479febb5c5ef973c84e6dfb9b0c2138c3dc812393363f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870268456723171,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44c918fbf3e851f4171bbf499a60a4705a06a5fb774a75e8c60676727f40ea8,PodSandboxId:ae1bf2c9613f4c013ffb62a95a59ef0d7f323efa1b92b7ab41c3289acc0d5315,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172687026840425303
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ecd6ed8c863ad084d9b83f9d418f90ef130f21a341361dcccc062a4f1165174,PodSandboxId:c5293d5b5c4ac0eaba2726293ff65fa1aab95f319e3e7b83710827f5f03e6927,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:17268
70268395372961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:172687
0266218357274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68a260c5ac88ec4be65f2c97ca10f74764283af39c29b131ca92007bcf9cef2,PodSandboxId:57e945265a417a39e97b25a7be7ba203b8f2eaf32eff81142d3d76ba28bd8fe5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870264223497719,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870251082314289,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870250914362559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff,PodSandboxId:09803dbed6e65a791d5ddfe7f0545f145778d71117aa1070b518116471c
41e33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726870247218068461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963,PodSandboxId:6948040c4d5baa1e1107489d4735266730655e226622373b1763ff97d12d557f,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726870247521145040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8,PodSandboxId:86aaf4b3ebd36afde6bf41b63dd1d1b5d4872fa36b60d7b63a5f7b187df42415,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726870247313190146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08,PodSandboxId:8551bc4a85d68e5d20c33220c66194bc025d38bb764886df0d22038043017904,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870247319596415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929,PodSandboxId:9b828584ace547f22eb531d2f4bb683d0fe154f7fe39679c6d5efa6daba2bedf,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726870247290471919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a44353c-74f9-49e6-94d3-48a5039af4c9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.458758232Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=307ff8b2-687f-4a57-84e4-a23d52f40deb name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.458954608Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=307ff8b2-687f-4a57-84e4-a23d52f40deb name=/runtime.v1.RuntimeService/Version
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.462523166Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b3874c3f-01ff-4b7e-93b5-d69b841be712 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.464158478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870276464050424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b3874c3f-01ff-4b7e-93b5-d69b841be712 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.465634237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8dcc499f-d9f7-46fc-a240-4ca96aef413c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.465731333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8dcc499f-d9f7-46fc-a240-4ca96aef413c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:11:16 kubernetes-upgrade-696728 crio[3000]: time="2024-09-20 22:11:16.466653597Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:64c83541283fd23d252e798602e62a310d6f29db5384200873f4f104c28d8338,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272237751675,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d96918cb88645b3a8412c0264388fafbeafcb627023d6b2d58177a649a4f12b1,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870272282912863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55a9dc7bcbe93b478af66cf86fb1d78f2bb4d3a0d2edac3a195875c39829b15c,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1726870272294046084,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2393388229dba5d9cdd1ae7bddb4efd8ca83c6f5c1fe117e79b8ea77f00ac027,PodSandboxId:3bd607b946b731b7347003c36efd2d3e70b110d8c9669b3f670b92969810a741,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,Created
At:1726870268425663427,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af81e0de77b145f5ea62640f88f8f43b6c0213061e813054254d0c60e8d7fe99,PodSandboxId:5ff61e5bb5591b5a43479febb5c5ef973c84e6dfb9b0c2138c3dc812393363f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870268456723171,La
bels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e44c918fbf3e851f4171bbf499a60a4705a06a5fb774a75e8c60676727f40ea8,PodSandboxId:ae1bf2c9613f4c013ffb62a95a59ef0d7f323efa1b92b7ab41c3289acc0d5315,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:172687026840425303
6,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ecd6ed8c863ad084d9b83f9d418f90ef130f21a341361dcccc062a4f1165174,PodSandboxId:c5293d5b5c4ac0eaba2726293ff65fa1aab95f319e3e7b83710827f5f03e6927,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:17268
70268395372961,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58,PodSandboxId:1d34f4a6b119f5c67e32503b9dc230f2395d48c79dbe1a77a2d2f774284047d1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:172687
0266218357274,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 815b34aa-872f-46eb-9c27-64452e0d49f9,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d68a260c5ac88ec4be65f2c97ca10f74764283af39c29b131ca92007bcf9cef2,PodSandboxId:57e945265a417a39e97b25a7be7ba203b8f2eaf32eff81142d3d76ba28bd8fe5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870264223497719,Labels:
map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730,PodSandboxId:47525047ca9d0f68d9976e9fd7d50bdcf16391e611f780425bce99fbaea48b73,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870251082314289,Labels:map[string]string{io.kubernetes.conta
iner.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-77zwp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c83a3341-7159-4f94-b624-79450d4689ca,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03,PodSandboxId:8e2595df90c0d35b6bb6c0724d39bf19c6d5ad1345fd4c8a02c81d9f408ea836,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedIm
age:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1726870250914362559,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qst7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 562afc6d-61e7-4c7b-9d09-d46cddf8bd6b,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff,PodSandboxId:09803dbed6e65a791d5ddfe7f0545f145778d71117aa1070b518116471c
41e33,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1726870247218068461,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vkhgq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963,PodSandboxId:6948040c4d5baa1e1107489d4735266730655e226622373b1763ff97d12d557f,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1726870247521145040,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27d43f3f0af04e5bdc95cdac4f29fa23,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8,PodSandboxId:86aaf4b3ebd36afde6bf41b63dd1d1b5d4872fa36b60d7b63a5f7b187df42415,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1726870247313190146,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c842dca4bf53881977511ec68792aaf,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08,PodSandboxId:8551bc4a85d68e5d20c33220c66194bc025d38bb764886df0d22038043017904,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870247319596415,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3517a75f3122835c1d435bd9e999000b,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929,PodSandboxId:9b828584ace547f22eb531d2f4bb683d0fe154f7fe39679c6d5efa6daba2bedf,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1726870247290471919,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-696728,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dcee247ff9f02f69d6ff49af9b2d0f7,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8dcc499f-d9f7-46fc-a240-4ca96aef413c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55a9dc7bcbe93       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   1d34f4a6b119f       storage-provisioner
	d96918cb88645       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   8e2595df90c0d       coredns-7c65d6cfc9-6qst7
	64c83541283fd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   4 seconds ago       Running             coredns                   2                   47525047ca9d0       coredns-7c65d6cfc9-77zwp
	af81e0de77b14       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   8 seconds ago       Running             kube-apiserver            2                   5ff61e5bb5591       kube-apiserver-kubernetes-upgrade-696728
	2393388229dba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   8 seconds ago       Running             etcd                      2                   3bd607b946b73       etcd-kubernetes-upgrade-696728
	e44c918fbf3e8       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   8 seconds ago       Running             kube-controller-manager   2                   ae1bf2c9613f4       kube-controller-manager-kubernetes-upgrade-696728
	0ecd6ed8c863a       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   8 seconds ago       Running             kube-scheduler            2                   c5293d5b5c4ac       kube-scheduler-kubernetes-upgrade-696728
	d16963765df8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 seconds ago      Exited              storage-provisioner       2                   1d34f4a6b119f       storage-provisioner
	d68a260c5ac88       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   12 seconds ago      Running             kube-proxy                2                   57e945265a417       kube-proxy-vkhgq
	8cce8796663c6       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   47525047ca9d0       coredns-7c65d6cfc9-77zwp
	b5a73bcc709e9       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   26 seconds ago      Exited              coredns                   1                   8e2595df90c0d       coredns-7c65d6cfc9-6qst7
	85fa58375ff43       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   29 seconds ago      Exited              etcd                      1                   6948040c4d5ba       etcd-kubernetes-upgrade-696728
	7952e86f7d00d       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   29 seconds ago      Exited              kube-apiserver            1                   8551bc4a85d68       kube-apiserver-kubernetes-upgrade-696728
	6c4580ed2d058       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   29 seconds ago      Exited              kube-controller-manager   1                   86aaf4b3ebd36       kube-controller-manager-kubernetes-upgrade-696728
	cfeabc550d344       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   29 seconds ago      Exited              kube-scheduler            1                   9b828584ace54       kube-scheduler-kubernetes-upgrade-696728
	466037316faaf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   29 seconds ago      Exited              kube-proxy                1                   09803dbed6e65       kube-proxy-vkhgq
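The container listing above is CRI-O's snapshot of the node just after the upgrade restart: the Attempt-2 control-plane containers are Running while their Attempt-1 predecessors are Exited. A minimal sketch for reproducing such a snapshot by hand, assuming the kubernetes-upgrade-696728 profile is still up and using crictl inside the guest (shipped in the minikube image):

	# list all containers, including exited ones, via the CRI-O socket
	$ minikube ssh -p kubernetes-upgrade-696728 "sudo crictl ps -a"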
	
	
	==> coredns [64c83541283fd23d252e798602e62a310d6f29db5384200873f4f104c28d8338] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
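The "connection refused" lines in both exited CoreDNS logs cover the window in which the kube-apiserver was itself restarting; each instance eventually starts with an unsynced API and is later terminated (SIGTERM, lameduck) when the pods are recreated. A hedged check that the replacement DNS pods settled, assuming the standard k8s-app=kube-dns label carried by minikube's CoreDNS deployment:

	# confirm the CoreDNS pods recreated after the upgrade are Running
	$ kubectl --context kubernetes-upgrade-696728 -n kube-system get pods -l k8s-app=kube-dns -o wide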
	
	
	==> coredns [d96918cb88645b3a8412c0264388fafbeafcb627023d6b2d58177a649a4f12b1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-696728
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-696728
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:10:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-696728
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:11:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:11:11 +0000   Fri, 20 Sep 2024 22:10:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:11:11 +0000   Fri, 20 Sep 2024 22:10:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:11:11 +0000   Fri, 20 Sep 2024 22:10:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:11:11 +0000   Fri, 20 Sep 2024 22:10:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.192
	  Hostname:    kubernetes-upgrade-696728
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1995ff8819284806981d583ad0c565c7
	  System UUID:                1995ff88-1928-4806-981d-583ad0c565c7
	  Boot ID:                    7fd3456a-0fd1-4e29-854e-5c2f5292967a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6qst7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 coredns-7c65d6cfc9-77zwp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     47s
	  kube-system                 etcd-kubernetes-upgrade-696728                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         48s
	  kube-system                 kube-apiserver-kubernetes-upgrade-696728             250m (12%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-696728    200m (10%)    0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-vkhgq                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kube-scheduler-kubernetes-upgrade-696728             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 44s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x7 over 59s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasSufficientPID
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           47s                node-controller  Node kubernetes-upgrade-696728 event: Registered Node kubernetes-upgrade-696728 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10s (x8 over 11s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x8 over 11s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x7 over 11s)  kubelet          Node kubernetes-upgrade-696728 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node kubernetes-upgrade-696728 event: Registered Node kubernetes-upgrade-696728 in Controller
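The node description and event list above are what kubectl reports for the upgraded node; for example, assuming the kubeconfig context that minikube creates for this profile:

	# same view, taken directly from the cluster
	$ kubectl --context kubernetes-upgrade-696728 describe node kubernetes-upgrade-696728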
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.218528] systemd-fstab-generator[557]: Ignoring "noauto" option for root device
	[  +0.072594] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054401] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.240196] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.121738] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.338092] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +4.696839] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +0.071855] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.909058] systemd-fstab-generator[839]: Ignoring "noauto" option for root device
	[  +7.221804] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.102403] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.461198] kauditd_printk_skb: 18 callbacks suppressed
	[ +13.370599] systemd-fstab-generator[2182]: Ignoring "noauto" option for root device
	[  +0.089807] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.078279] systemd-fstab-generator[2194]: Ignoring "noauto" option for root device
	[  +0.789236] systemd-fstab-generator[2422]: Ignoring "noauto" option for root device
	[  +0.463520] systemd-fstab-generator[2593]: Ignoring "noauto" option for root device
	[  +0.955879] systemd-fstab-generator[2886]: Ignoring "noauto" option for root device
	[  +1.383444] systemd-fstab-generator[3195]: Ignoring "noauto" option for root device
	[Sep20 22:11] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.039749] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.184435] systemd-fstab-generator[4017]: Ignoring "noauto" option for root device
	[  +4.317506] kauditd_printk_skb: 45 callbacks suppressed
	[  +2.034170] systemd-fstab-generator[4517]: Ignoring "noauto" option for root device
	
	
	==> etcd [2393388229dba5d9cdd1ae7bddb4efd8ca83c6f5c1fe117e79b8ea77f00ac027] <==
	{"level":"info","ts":"2024-09-20T22:11:08.795365Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T22:11:08.795657Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"745e8914bf86604b","initial-advertise-peer-urls":["https://192.168.50.192:2380"],"listen-peer-urls":["https://192.168.50.192:2380"],"advertise-client-urls":["https://192.168.50.192:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.192:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T22:11:08.795711Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T22:11:08.795933Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.192:2380"}
	{"level":"info","ts":"2024-09-20T22:11:08.795970Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.192:2380"}
	{"level":"info","ts":"2024-09-20T22:11:09.864308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:09.864410Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:09.864457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b received MsgPreVoteResp from 745e8914bf86604b at term 2"}
	{"level":"info","ts":"2024-09-20T22:11:09.864491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b became candidate at term 3"}
	{"level":"info","ts":"2024-09-20T22:11:09.864516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b received MsgVoteResp from 745e8914bf86604b at term 3"}
	{"level":"info","ts":"2024-09-20T22:11:09.864551Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"745e8914bf86604b became leader at term 3"}
	{"level":"info","ts":"2024-09-20T22:11:09.864576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 745e8914bf86604b elected leader 745e8914bf86604b at term 3"}
	{"level":"info","ts":"2024-09-20T22:11:09.870595Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"745e8914bf86604b","local-member-attributes":"{Name:kubernetes-upgrade-696728 ClientURLs:[https://192.168.50.192:2379]}","request-path":"/0/members/745e8914bf86604b/attributes","cluster-id":"4604bc02127d75fd","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:11:09.870607Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:11:09.870975Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:11:09.871011Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:11:09.870650Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:11:09.871801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:11:09.871834Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:11:09.872669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T22:11:09.873101Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.192:2379"}
	{"level":"info","ts":"2024-09-20T22:11:16.447295Z","caller":"traceutil/trace.go:171","msg":"trace[389024856] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"156.95777ms","start":"2024-09-20T22:11:16.290319Z","end":"2024-09-20T22:11:16.447277Z","steps":["trace[389024856] 'process raft request'  (duration: 156.818002ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:11:16.629984Z","caller":"traceutil/trace.go:171","msg":"trace[11261279] transaction","detail":"{read_only:false; response_revision:465; number_of_response:1; }","duration":"172.997644ms","start":"2024-09-20T22:11:16.456963Z","end":"2024-09-20T22:11:16.629961Z","steps":["trace[11261279] 'process raft request'  (duration: 154.753182ms)","trace[11261279] 'compare'  (duration: 18.078983ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T22:11:16.857676Z","caller":"traceutil/trace.go:171","msg":"trace[903066994] transaction","detail":"{read_only:false; response_revision:466; number_of_response:1; }","duration":"123.541193ms","start":"2024-09-20T22:11:16.734115Z","end":"2024-09-20T22:11:16.857656Z","steps":["trace[903066994] 'process raft request'  (duration: 119.427649ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:11:18.564160Z","caller":"traceutil/trace.go:171","msg":"trace[417289118] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"127.132159ms","start":"2024-09-20T22:11:18.437008Z","end":"2024-09-20T22:11:18.564141Z","steps":["trace[417289118] 'process raft request'  (duration: 126.995088ms)"],"step_count":1}
	
	
	==> etcd [85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963] <==
	{"level":"warn","ts":"2024-09-20T22:10:48.522726Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T22:10:48.527954Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.192:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.192:2380","--initial-cluster=kubernetes-upgrade-696728=https://192.168.50.192:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.192:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.192:2380","--name=kubernetes-upgrade-696728","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-09-20T22:10:48.530085Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-09-20T22:10:48.531392Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-09-20T22:10:48.531487Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.192:2380"]}
	{"level":"info","ts":"2024-09-20T22:10:48.531588Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T22:10:48.535129Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.192:2379"]}
	{"level":"info","ts":"2024-09-20T22:10:48.542180Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-696728","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.192:2380"],"listen-peer-urls":["https://192.168.50.192:2380"],"advertise-client-urls":["https://192.168.50.192:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.192:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new
","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	
	
	==> kernel <==
	 22:11:18 up 1 min,  0 users,  load average: 2.06, 0.64, 0.22
	Linux kubernetes-upgrade-696728 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08] <==
	I0920 22:10:48.644481       1 options.go:228] external host was not specified, using 192.168.50.192
	I0920 22:10:48.671388       1 server.go:142] Version: v1.31.1
	I0920 22:10:48.671444       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [af81e0de77b145f5ea62640f88f8f43b6c0213061e813054254d0c60e8d7fe99] <==
	I0920 22:11:11.355127       1 policy_source.go:224] refreshing policies
	I0920 22:11:11.369261       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 22:11:11.432724       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0920 22:11:11.432895       1 shared_informer.go:320] Caches are synced for configmaps
	I0920 22:11:11.432984       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0920 22:11:11.433016       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0920 22:11:11.437722       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0920 22:11:11.437910       1 aggregator.go:171] initial CRD sync complete...
	I0920 22:11:11.437954       1 autoregister_controller.go:144] Starting autoregister controller
	I0920 22:11:11.437985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0920 22:11:11.438015       1 cache.go:39] Caches are synced for autoregister controller
	I0920 22:11:11.439290       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0920 22:11:11.443553       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0920 22:11:11.447727       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0920 22:11:11.450041       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0920 22:11:11.450074       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0920 22:11:11.450144       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0920 22:11:12.267198       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0920 22:11:12.634372       1 controller.go:615] quota admission added evaluator for: endpoints
	I0920 22:11:13.677361       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0920 22:11:13.698784       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0920 22:11:13.767034       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0920 22:11:13.842764       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 22:11:13.852447       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0920 22:11:14.708445       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8] <==
	
	
	==> kube-controller-manager [e44c918fbf3e851f4171bbf499a60a4705a06a5fb774a75e8c60676727f40ea8] <==
	I0920 22:11:14.728069       1 shared_informer.go:320] Caches are synced for PVC protection
	I0920 22:11:14.728106       1 shared_informer.go:320] Caches are synced for GC
	I0920 22:11:14.728167       1 shared_informer.go:320] Caches are synced for taint
	I0920 22:11:14.728255       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0920 22:11:14.728379       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-696728"
	I0920 22:11:14.728453       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0920 22:11:14.729551       1 shared_informer.go:320] Caches are synced for daemon sets
	I0920 22:11:14.738974       1 shared_informer.go:320] Caches are synced for stateful set
	I0920 22:11:14.754027       1 shared_informer.go:320] Caches are synced for attach detach
	I0920 22:11:14.787954       1 shared_informer.go:320] Caches are synced for PV protection
	I0920 22:11:14.824619       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0920 22:11:14.824825       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="133.301µs"
	I0920 22:11:14.844193       1 shared_informer.go:320] Caches are synced for disruption
	I0920 22:11:14.867643       1 shared_informer.go:320] Caches are synced for endpoint
	I0920 22:11:14.873292       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0920 22:11:14.876041       1 shared_informer.go:320] Caches are synced for deployment
	I0920 22:11:14.904294       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 22:11:14.938095       1 shared_informer.go:320] Caches are synced for resource quota
	I0920 22:11:15.329273       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 22:11:15.329300       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0920 22:11:15.364981       1 shared_informer.go:320] Caches are synced for garbage collector
	I0920 22:11:16.450645       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="211.854651ms"
	I0920 22:11:16.450818       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="93.741µs"
	I0920 22:11:18.709477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="138.825127ms"
	I0920 22:11:18.709713       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.097µs"
	
	
	==> kube-proxy [466037316faafd9c1a59b4f684ba72778a91e1ce50a55abb6b6426e8c362c6ff] <==
	
	
	==> kube-proxy [d68a260c5ac88ec4be65f2c97ca10f74764283af39c29b131ca92007bcf9cef2] <==
	E0920 22:11:04.383280       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:11:04.385106       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-696728\": dial tcp 192.168.50.192:8443: connect: connection refused"
	E0920 22:11:05.424164       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-696728\": dial tcp 192.168.50.192:8443: connect: connection refused"
	E0920 22:11:07.548490       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-696728\": dial tcp 192.168.50.192:8443: connect: connection refused"
	I0920 22:11:11.712092       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.192"]
	E0920 22:11:11.712462       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:11:11.754128       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:11:11.754203       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:11:11.754244       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:11:11.758233       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:11:11.758813       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:11:11.758913       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:11:11.760479       1 config.go:199] "Starting service config controller"
	I0920 22:11:11.760538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:11:11.760600       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:11:11.760629       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:11:11.761420       1 config.go:328] "Starting node config controller"
	I0920 22:11:11.763962       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:11:11.861228       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:11:11.861344       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:11:11.864219       1 shared_informer.go:320] Caches are synced for node config
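The nftables cleanup error and the "No iptables support for family IPv6" line show kube-proxy falling back to the IPv4 iptables proxier, and the earlier node-info retries stop once the restarted apiserver becomes reachable at 22:11:11. A hedged way to confirm the proxier actually programmed rules on the node (KUBE-SERVICES is the chain the iptables mode installs in the nat table):

	# inspect the first few kube-proxy service rules inside the guest
	$ minikube ssh -p kubernetes-upgrade-696728 "sudo iptables -t nat -L KUBE-SERVICES | head"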
	
	
	==> kube-scheduler [0ecd6ed8c863ad084d9b83f9d418f90ef130f21a341361dcccc062a4f1165174] <==
	I0920 22:11:09.561342       1 serving.go:386] Generated self-signed cert in-memory
	W0920 22:11:11.345406       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 22:11:11.345551       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 22:11:11.345588       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 22:11:11.345618       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 22:11:11.385134       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 22:11:11.385255       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:11:11.389435       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 22:11:11.389545       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 22:11:11.389891       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 22:11:11.389992       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 22:11:11.491016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929] <==
	
	
	==> kubelet <==
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: E0920 22:11:08.289960    4024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.192:8443: connect: connection refused" node="kubernetes-upgrade-696728"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:08.373255    4024 scope.go:117] "RemoveContainer" containerID="6c4580ed2d058b492f22258879803533a35fb46da3d06804d95af6208c6696c8"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:08.374067    4024 scope.go:117] "RemoveContainer" containerID="cfeabc550d344114be4dd61f45c469e70f4aeab71938db624415ee89f098a929"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:08.374349    4024 scope.go:117] "RemoveContainer" containerID="85fa58375ff433726090b9dedcf6590989dc66511ec541f2f362e2bf6fdc5963"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:08.375962    4024 scope.go:117] "RemoveContainer" containerID="7952e86f7d00de30a7c62f095aa7dddb762bb03d6d72802b3435f29862ae0c08"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: E0920 22:11:08.509716    4024 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-696728?timeout=10s\": dial tcp 192.168.50.192:8443: connect: connection refused" interval="800ms"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:08.691939    4024 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-696728"
	Sep 20 22:11:08 kubernetes-upgrade-696728 kubelet[4024]: E0920 22:11:08.694588    4024 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.192:8443: connect: connection refused" node="kubernetes-upgrade-696728"
	Sep 20 22:11:09 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:09.496746    4024 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-696728"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.467093    4024 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-696728"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.467211    4024 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-696728"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.467244    4024 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.468375    4024 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.885197    4024 apiserver.go:52] "Watching apiserver"
	Sep 20 22:11:11 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:11.992244    4024 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.019209    4024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/815b34aa-872f-46eb-9c27-64452e0d49f9-tmp\") pod \"storage-provisioner\" (UID: \"815b34aa-872f-46eb-9c27-64452e0d49f9\") " pod="kube-system/storage-provisioner"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.019447    4024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b-lib-modules\") pod \"kube-proxy-vkhgq\" (UID: \"9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b\") " pod="kube-system/kube-proxy-vkhgq"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.019590    4024 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b-xtables-lock\") pod \"kube-proxy-vkhgq\" (UID: \"9d56fcfd-dd4f-4c9f-a030-f4e8b8aeb71b\") " pod="kube-system/kube-proxy-vkhgq"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.203301    4024 scope.go:117] "RemoveContainer" containerID="8cce8796663c686272b664b73dceacf806cb551effaab61888969c0da2e6f730"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.204367    4024 scope.go:117] "RemoveContainer" containerID="d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58"
	Sep 20 22:11:12 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:12.206969    4024 scope.go:117] "RemoveContainer" containerID="b5a73bcc709e9f29df914100696fb3c096111afd0a6fb981ac56f1ab141c5a03"
	Sep 20 22:11:16 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:16.215223    4024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 20 22:11:17 kubernetes-upgrade-696728 kubelet[4024]: E0920 22:11:17.987305    4024 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870277986794350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:11:17 kubernetes-upgrade-696728 kubelet[4024]: E0920 22:11:17.987341    4024 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726870277986794350,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:11:18 kubernetes-upgrade-696728 kubelet[4024]: I0920 22:11:18.425810    4024 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [55a9dc7bcbe93b478af66cf86fb1d78f2bb4d3a0d2edac3a195875c39829b15c] <==
	I0920 22:11:12.541641       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:11:12.595753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:11:12.595927       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:11:12.665134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:11:12.665491       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-696728_8be39339-c6b7-43e5-bf78-3383f9143d5c!
	I0920 22:11:12.665749       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f96d729d-6e89-46e4-b62e-8edc8444dfc4", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-696728_8be39339-c6b7-43e5-bf78-3383f9143d5c became leader
	I0920 22:11:12.769459       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-696728_8be39339-c6b7-43e5-bf78-3383f9143d5c!
	
	
	==> storage-provisioner [d16963765df8e63f6d6072fc881cf087d63f496683f6bdf7f60c48b7a4f5cb58] <==
	I0920 22:11:06.295459       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 22:11:06.298141       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-696728 -n kubernetes-upgrade-696728
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-696728 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-696728" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-696728
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-696728: (1.352708956s)
--- FAIL: TestKubernetesUpgrade (411.66s)
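The kubelet log in the post-mortem above shows repeated "connection refused" errors against https://control-plane.minikube.internal:8443 (192.168.50.192:8443) followed by a successful node re-registration at 22:11:11, i.e. the apiserver endpoint was transiently unreachable while the control plane restarted during the upgrade. A minimal triage sketch for a local reproduction, run before the profile is deleted; it assumes curl and crictl are available inside the VM (hypothetical checks, not part of the test flow):

	# confirm apiserver/kubelet state as reported by minikube
	out/minikube-linux-amd64 status -p kubernetes-upgrade-696728
	# check whether a kube-apiserver container is running (or recently exited) inside the VM
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-696728 -- sudo crictl ps -a | grep kube-apiserver
	# probe the endpoint the kubelet was failing to reach
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-696728 -- curl -k https://192.168.50.192:8443/healthz
	# verify the node registered once the apiserver came back
	kubectl --context kubernetes-upgrade-696728 get nodes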

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (285s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.708355508s)

                                                
                                                
-- stdout --
	* [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:10:31.562128   61195 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:31.562244   61195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:31.562253   61195 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:31.562258   61195 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:31.562433   61195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:10:31.563037   61195 out.go:352] Setting JSON to false
	I0920 22:10:31.563920   61195 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6780,"bootTime":1726863452,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:10:31.564025   61195 start.go:139] virtualization: kvm guest
	I0920 22:10:31.566322   61195 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:10:31.567952   61195 notify.go:220] Checking for updates...
	I0920 22:10:31.567975   61195 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:10:31.569500   61195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:31.571072   61195 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:10:31.572304   61195 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:10:31.573401   61195 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:10:31.574474   61195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:10:31.576116   61195 config.go:182] Loaded profile config "cert-expiration-673228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:31.576280   61195 config.go:182] Loaded profile config "cert-options-335786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:31.576404   61195 config.go:182] Loaded profile config "kubernetes-upgrade-696728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:31.576515   61195 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:31.611702   61195 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:10:31.612941   61195 start.go:297] selected driver: kvm2
	I0920 22:10:31.612956   61195 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:10:31.612967   61195 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:10:31.613790   61195 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:31.613883   61195 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:10:31.631689   61195 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:10:31.631749   61195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:10:31.632059   61195 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:10:31.632094   61195 cni.go:84] Creating CNI manager for ""
	I0920 22:10:31.632148   61195 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:10:31.632158   61195 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:10:31.632217   61195 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:10:31.632357   61195 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:10:31.634549   61195 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:10:31.635904   61195 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:10:31.635977   61195 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:10:31.635991   61195 cache.go:56] Caching tarball of preloaded images
	I0920 22:10:31.636084   61195 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:10:31.636097   61195 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:10:31.636223   61195 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:10:31.636255   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json: {Name:mk099905874d4790d3cd5d40a42020a73df0478f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:10:31.636897   61195 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:10:45.640257   61195 start.go:364] duration metric: took 14.003272272s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:10:45.640349   61195 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:10:45.640468   61195 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 22:10:45.642513   61195 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 22:10:45.642738   61195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:10:45.642789   61195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:10:45.663541   61195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43127
	I0920 22:10:45.663964   61195 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:10:45.664612   61195 main.go:141] libmachine: Using API Version  1
	I0920 22:10:45.664634   61195 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:10:45.664979   61195 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:10:45.665198   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:10:45.665369   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:10:45.665551   61195 start.go:159] libmachine.API.Create for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:10:45.665601   61195 client.go:168] LocalClient.Create starting
	I0920 22:10:45.665637   61195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 22:10:45.665680   61195 main.go:141] libmachine: Decoding PEM data...
	I0920 22:10:45.665702   61195 main.go:141] libmachine: Parsing certificate...
	I0920 22:10:45.665768   61195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 22:10:45.665795   61195 main.go:141] libmachine: Decoding PEM data...
	I0920 22:10:45.665810   61195 main.go:141] libmachine: Parsing certificate...
	I0920 22:10:45.665839   61195 main.go:141] libmachine: Running pre-create checks...
	I0920 22:10:45.665850   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .PreCreateCheck
	I0920 22:10:45.666243   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:10:45.666732   61195 main.go:141] libmachine: Creating machine...
	I0920 22:10:45.666751   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .Create
	I0920 22:10:45.666994   61195 main.go:141] libmachine: (old-k8s-version-593412) Creating KVM machine...
	I0920 22:10:45.668387   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found existing default KVM network
	I0920 22:10:45.669996   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:45.669811   61331 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b2:f2:5e} reservation:<nil>}
	I0920 22:10:45.670933   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:45.670839   61331 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9e:f3:10} reservation:<nil>}
	I0920 22:10:45.672026   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:45.671929   61331 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002850f0}
	I0920 22:10:45.672054   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | created network xml: 
	I0920 22:10:45.672067   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | <network>
	I0920 22:10:45.672076   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   <name>mk-old-k8s-version-593412</name>
	I0920 22:10:45.672090   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   <dns enable='no'/>
	I0920 22:10:45.672097   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   
	I0920 22:10:45.672107   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0920 22:10:45.672115   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |     <dhcp>
	I0920 22:10:45.672153   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0920 22:10:45.672174   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |     </dhcp>
	I0920 22:10:45.672185   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   </ip>
	I0920 22:10:45.672193   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG |   
	I0920 22:10:45.672202   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | </network>
	I0920 22:10:45.672210   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | 
	I0920 22:10:45.677676   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | trying to create private KVM network mk-old-k8s-version-593412 192.168.61.0/24...
	I0920 22:10:45.753070   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | private KVM network mk-old-k8s-version-593412 192.168.61.0/24 created
	I0920 22:10:45.753111   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412 ...
	I0920 22:10:45.753126   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:45.752996   61331 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:10:45.753192   61195 main.go:141] libmachine: (old-k8s-version-593412) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 22:10:45.753226   61195 main.go:141] libmachine: (old-k8s-version-593412) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 22:10:46.015026   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:46.014890   61331 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa...
	I0920 22:10:46.210076   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:46.209946   61331 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/old-k8s-version-593412.rawdisk...
	I0920 22:10:46.210105   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Writing magic tar header
	I0920 22:10:46.210143   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Writing SSH key tar header
	I0920 22:10:46.210181   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:46.210083   61331 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412 ...
	I0920 22:10:46.210207   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412
	I0920 22:10:46.210328   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412 (perms=drwx------)
	I0920 22:10:46.210357   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 22:10:46.210372   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 22:10:46.210390   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 22:10:46.210408   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 22:10:46.210419   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 22:10:46.210430   61195 main.go:141] libmachine: (old-k8s-version-593412) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 22:10:46.210437   61195 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:10:46.210454   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:10:46.210463   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 22:10:46.210490   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 22:10:46.210503   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home/jenkins
	I0920 22:10:46.210517   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Checking permissions on dir: /home
	I0920 22:10:46.210526   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Skipping /home - not owner
	I0920 22:10:46.211502   61195 main.go:141] libmachine: (old-k8s-version-593412) define libvirt domain using xml: 
	I0920 22:10:46.211523   61195 main.go:141] libmachine: (old-k8s-version-593412) <domain type='kvm'>
	I0920 22:10:46.211533   61195 main.go:141] libmachine: (old-k8s-version-593412)   <name>old-k8s-version-593412</name>
	I0920 22:10:46.211546   61195 main.go:141] libmachine: (old-k8s-version-593412)   <memory unit='MiB'>2200</memory>
	I0920 22:10:46.211554   61195 main.go:141] libmachine: (old-k8s-version-593412)   <vcpu>2</vcpu>
	I0920 22:10:46.211564   61195 main.go:141] libmachine: (old-k8s-version-593412)   <features>
	I0920 22:10:46.211572   61195 main.go:141] libmachine: (old-k8s-version-593412)     <acpi/>
	I0920 22:10:46.211599   61195 main.go:141] libmachine: (old-k8s-version-593412)     <apic/>
	I0920 22:10:46.211608   61195 main.go:141] libmachine: (old-k8s-version-593412)     <pae/>
	I0920 22:10:46.211613   61195 main.go:141] libmachine: (old-k8s-version-593412)     
	I0920 22:10:46.211638   61195 main.go:141] libmachine: (old-k8s-version-593412)   </features>
	I0920 22:10:46.211660   61195 main.go:141] libmachine: (old-k8s-version-593412)   <cpu mode='host-passthrough'>
	I0920 22:10:46.211670   61195 main.go:141] libmachine: (old-k8s-version-593412)   
	I0920 22:10:46.211679   61195 main.go:141] libmachine: (old-k8s-version-593412)   </cpu>
	I0920 22:10:46.211703   61195 main.go:141] libmachine: (old-k8s-version-593412)   <os>
	I0920 22:10:46.211715   61195 main.go:141] libmachine: (old-k8s-version-593412)     <type>hvm</type>
	I0920 22:10:46.211734   61195 main.go:141] libmachine: (old-k8s-version-593412)     <boot dev='cdrom'/>
	I0920 22:10:46.211750   61195 main.go:141] libmachine: (old-k8s-version-593412)     <boot dev='hd'/>
	I0920 22:10:46.211759   61195 main.go:141] libmachine: (old-k8s-version-593412)     <bootmenu enable='no'/>
	I0920 22:10:46.211765   61195 main.go:141] libmachine: (old-k8s-version-593412)   </os>
	I0920 22:10:46.211773   61195 main.go:141] libmachine: (old-k8s-version-593412)   <devices>
	I0920 22:10:46.211780   61195 main.go:141] libmachine: (old-k8s-version-593412)     <disk type='file' device='cdrom'>
	I0920 22:10:46.211792   61195 main.go:141] libmachine: (old-k8s-version-593412)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/boot2docker.iso'/>
	I0920 22:10:46.211800   61195 main.go:141] libmachine: (old-k8s-version-593412)       <target dev='hdc' bus='scsi'/>
	I0920 22:10:46.211810   61195 main.go:141] libmachine: (old-k8s-version-593412)       <readonly/>
	I0920 22:10:46.211830   61195 main.go:141] libmachine: (old-k8s-version-593412)     </disk>
	I0920 22:10:46.211839   61195 main.go:141] libmachine: (old-k8s-version-593412)     <disk type='file' device='disk'>
	I0920 22:10:46.211847   61195 main.go:141] libmachine: (old-k8s-version-593412)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 22:10:46.211861   61195 main.go:141] libmachine: (old-k8s-version-593412)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/old-k8s-version-593412.rawdisk'/>
	I0920 22:10:46.211868   61195 main.go:141] libmachine: (old-k8s-version-593412)       <target dev='hda' bus='virtio'/>
	I0920 22:10:46.211874   61195 main.go:141] libmachine: (old-k8s-version-593412)     </disk>
	I0920 22:10:46.211881   61195 main.go:141] libmachine: (old-k8s-version-593412)     <interface type='network'>
	I0920 22:10:46.211900   61195 main.go:141] libmachine: (old-k8s-version-593412)       <source network='mk-old-k8s-version-593412'/>
	I0920 22:10:46.211923   61195 main.go:141] libmachine: (old-k8s-version-593412)       <model type='virtio'/>
	I0920 22:10:46.211932   61195 main.go:141] libmachine: (old-k8s-version-593412)     </interface>
	I0920 22:10:46.211938   61195 main.go:141] libmachine: (old-k8s-version-593412)     <interface type='network'>
	I0920 22:10:46.211946   61195 main.go:141] libmachine: (old-k8s-version-593412)       <source network='default'/>
	I0920 22:10:46.211953   61195 main.go:141] libmachine: (old-k8s-version-593412)       <model type='virtio'/>
	I0920 22:10:46.211961   61195 main.go:141] libmachine: (old-k8s-version-593412)     </interface>
	I0920 22:10:46.211968   61195 main.go:141] libmachine: (old-k8s-version-593412)     <serial type='pty'>
	I0920 22:10:46.211975   61195 main.go:141] libmachine: (old-k8s-version-593412)       <target port='0'/>
	I0920 22:10:46.211981   61195 main.go:141] libmachine: (old-k8s-version-593412)     </serial>
	I0920 22:10:46.211990   61195 main.go:141] libmachine: (old-k8s-version-593412)     <console type='pty'>
	I0920 22:10:46.212002   61195 main.go:141] libmachine: (old-k8s-version-593412)       <target type='serial' port='0'/>
	I0920 22:10:46.212010   61195 main.go:141] libmachine: (old-k8s-version-593412)     </console>
	I0920 22:10:46.212016   61195 main.go:141] libmachine: (old-k8s-version-593412)     <rng model='virtio'>
	I0920 22:10:46.212025   61195 main.go:141] libmachine: (old-k8s-version-593412)       <backend model='random'>/dev/random</backend>
	I0920 22:10:46.212032   61195 main.go:141] libmachine: (old-k8s-version-593412)     </rng>
	I0920 22:10:46.212039   61195 main.go:141] libmachine: (old-k8s-version-593412)     
	I0920 22:10:46.212046   61195 main.go:141] libmachine: (old-k8s-version-593412)     
	I0920 22:10:46.212054   61195 main.go:141] libmachine: (old-k8s-version-593412)   </devices>
	I0920 22:10:46.212060   61195 main.go:141] libmachine: (old-k8s-version-593412) </domain>
	I0920 22:10:46.212069   61195 main.go:141] libmachine: (old-k8s-version-593412) 
	I0920 22:10:46.216509   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:f2:39:fa in network default
	I0920 22:10:46.217103   61195 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:10:46.217128   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:46.217821   61195 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:10:46.218243   61195 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:10:46.218815   61195 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:10:46.219679   61195 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:10:47.626222   61195 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:10:47.627225   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:47.627788   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:47.627813   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:47.627773   61331 retry.go:31] will retry after 310.808996ms: waiting for machine to come up
	I0920 22:10:47.940497   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:47.941113   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:47.941248   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:47.941190   61331 retry.go:31] will retry after 305.075758ms: waiting for machine to come up
	I0920 22:10:48.247714   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:48.248312   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:48.248342   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:48.248266   61331 retry.go:31] will retry after 297.080577ms: waiting for machine to come up
	I0920 22:10:48.547063   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:48.547703   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:48.547741   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:48.547644   61331 retry.go:31] will retry after 502.566104ms: waiting for machine to come up
	I0920 22:10:49.051531   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:49.052149   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:49.052171   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:49.052094   61331 retry.go:31] will retry after 497.215322ms: waiting for machine to come up
	I0920 22:10:49.550772   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:49.551280   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:49.551400   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:49.551352   61331 retry.go:31] will retry after 790.444362ms: waiting for machine to come up
	I0920 22:10:50.342917   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:50.343440   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:50.343456   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:50.343413   61331 retry.go:31] will retry after 916.276521ms: waiting for machine to come up
	I0920 22:10:51.260899   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:51.261401   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:51.261460   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:51.261389   61331 retry.go:31] will retry after 1.366940244s: waiting for machine to come up
	I0920 22:10:52.630007   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:52.630538   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:52.630569   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:52.630479   61331 retry.go:31] will retry after 1.347446229s: waiting for machine to come up
	I0920 22:10:53.979873   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:53.980455   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:53.980481   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:53.980403   61331 retry.go:31] will retry after 1.583851577s: waiting for machine to come up
	I0920 22:10:55.566284   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:55.566732   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:55.566762   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:55.566675   61331 retry.go:31] will retry after 1.837610862s: waiting for machine to come up
	I0920 22:10:57.406775   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:10:57.407466   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:10:57.407496   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:10:57.407418   61331 retry.go:31] will retry after 3.169825194s: waiting for machine to come up
	I0920 22:11:00.580086   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:00.580574   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:11:00.580600   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:11:00.580532   61331 retry.go:31] will retry after 3.614679602s: waiting for machine to come up
	I0920 22:11:04.196429   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:04.196864   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:11:04.196894   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:11:04.196792   61331 retry.go:31] will retry after 4.148422946s: waiting for machine to come up
	I0920 22:11:08.347985   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.348514   61195 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:11:08.348548   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.348557   61195 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:11:08.348932   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412
	I0920 22:11:08.427387   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:11:08.427439   61195 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:11:08.427453   61195 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:11:08.430504   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.430876   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.430912   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.431099   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:11:08.431138   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:11:08.431177   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:11:08.431202   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:11:08.431228   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:11:08.563455   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:11:08.563763   61195 main.go:141] libmachine: (old-k8s-version-593412) KVM machine creation complete!
	I0920 22:11:08.564069   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:11:08.564718   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:08.564912   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:08.565070   61195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 22:11:08.565087   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:11:08.566385   61195 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 22:11:08.566398   61195 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 22:11:08.566403   61195 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 22:11:08.566408   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.568838   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.569214   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.569242   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.569365   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.569523   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.569659   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.569810   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.569937   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.570220   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.570239   61195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 22:11:08.690857   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:11:08.690885   61195 main.go:141] libmachine: Detecting the provisioner...
	I0920 22:11:08.690895   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.694148   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.694544   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.694569   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.694785   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.695034   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.695216   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.695373   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.695526   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.695702   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.695712   61195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 22:11:08.820566   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 22:11:08.820684   61195 main.go:141] libmachine: found compatible host: buildroot
	I0920 22:11:08.820699   61195 main.go:141] libmachine: Provisioning with buildroot...
	I0920 22:11:08.820714   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:08.820996   61195 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:11:08.821019   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:08.821200   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.824131   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.824550   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.824578   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.824777   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.825017   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.825223   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.825359   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.825548   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.825808   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.825827   61195 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:11:08.957512   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:11:08.957544   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:08.961239   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.961721   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:08.961751   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:08.961968   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:08.962163   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.962380   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:08.962532   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:08.962680   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:08.962841   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:08.962857   61195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:11:09.085061   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:11:09.085116   61195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:11:09.085142   61195 buildroot.go:174] setting up certificates
	I0920 22:11:09.085156   61195 provision.go:84] configureAuth start
	I0920 22:11:09.085173   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:11:09.085477   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.088394   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.088792   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.088829   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.088999   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.091337   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.091652   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.091692   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.091795   61195 provision.go:143] copyHostCerts
	I0920 22:11:09.091850   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:11:09.091872   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:11:09.091950   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:11:09.092127   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:11:09.092139   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:11:09.092172   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:11:09.092262   61195 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:11:09.092272   61195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:11:09.092298   61195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:11:09.092375   61195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:11:09.241309   61195 provision.go:177] copyRemoteCerts
	I0920 22:11:09.241372   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:11:09.241399   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.244141   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.244454   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.244493   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.244643   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.244821   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.244994   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.245145   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.336260   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:11:09.362480   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:11:09.389045   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:11:09.413697   61195 provision.go:87] duration metric: took 328.525516ms to configureAuth
	I0920 22:11:09.413745   61195 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:11:09.413938   61195 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:11:09.414015   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.416615   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.416962   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.417007   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.417150   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.417305   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.417482   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.417658   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.417835   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.418071   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:09.418092   61195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:11:09.669740   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:11:09.669825   61195 main.go:141] libmachine: Checking connection to Docker...
	I0920 22:11:09.669840   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetURL
	I0920 22:11:09.671417   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using libvirt version 6000000
	I0920 22:11:09.674306   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.674626   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.674655   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.674855   61195 main.go:141] libmachine: Docker is up and running!
	I0920 22:11:09.674872   61195 main.go:141] libmachine: Reticulating splines...
	I0920 22:11:09.674880   61195 client.go:171] duration metric: took 24.009268702s to LocalClient.Create
	I0920 22:11:09.674906   61195 start.go:167] duration metric: took 24.009358408s to libmachine.API.Create "old-k8s-version-593412"
	I0920 22:11:09.674917   61195 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:11:09.674929   61195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:11:09.674951   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.675202   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:11:09.675244   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.677490   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.677808   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.677836   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.677987   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.678183   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.678361   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.678517   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.765383   61195 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:11:09.769766   61195 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:11:09.769789   61195 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:11:09.769851   61195 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:11:09.769947   61195 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:11:09.770066   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:11:09.780182   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:11:09.806621   61195 start.go:296] duration metric: took 131.691184ms for postStartSetup
	I0920 22:11:09.806665   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:11:09.807293   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.810021   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.810446   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.810477   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.810693   61195 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:11:09.810905   61195 start.go:128] duration metric: took 24.170425111s to createHost
	I0920 22:11:09.810926   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.813525   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.813841   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.813877   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.814089   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.814262   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.814426   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.814522   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.814657   61195 main.go:141] libmachine: Using SSH client type: native
	I0920 22:11:09.814884   61195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:11:09.814896   61195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:11:09.933043   61195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870269.901927309
	
	I0920 22:11:09.933071   61195 fix.go:216] guest clock: 1726870269.901927309
	I0920 22:11:09.933081   61195 fix.go:229] Guest: 2024-09-20 22:11:09.901927309 +0000 UTC Remote: 2024-09-20 22:11:09.810916219 +0000 UTC m=+38.285344566 (delta=91.01109ms)
	I0920 22:11:09.933137   61195 fix.go:200] guest clock delta is within tolerance: 91.01109ms
	I0920 22:11:09.933149   61195 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 24.292835276s
	I0920 22:11:09.933212   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.933524   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:09.936810   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.937169   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.937198   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.937400   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.937885   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.938087   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:11:09.938196   61195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:11:09.938239   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.938335   61195 ssh_runner.go:195] Run: cat /version.json
	I0920 22:11:09.938360   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:11:09.941530   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.941718   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.941960   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.941997   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.942264   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.942422   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:09.942430   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.942483   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:09.942593   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:11:09.942662   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.942753   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:11:09.942914   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:11:09.942915   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:09.943049   61195 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:11:10.028373   61195 ssh_runner.go:195] Run: systemctl --version
	I0920 22:11:10.057467   61195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:11:10.236787   61195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:11:10.246471   61195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:11:10.246548   61195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:11:10.265996   61195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:11:10.266028   61195 start.go:495] detecting cgroup driver to use...
	I0920 22:11:10.266106   61195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:11:10.289030   61195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:11:10.307350   61195 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:11:10.307436   61195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:11:10.324682   61195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:11:10.343455   61195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:11:10.484982   61195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:11:10.652002   61195 docker.go:233] disabling docker service ...
	I0920 22:11:10.652072   61195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:11:10.668019   61195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:11:10.683298   61195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:11:10.840832   61195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:11:10.997730   61195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:11:11.013827   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:11:11.037710   61195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:11:11.037779   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.050522   61195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:11:11.050607   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.062222   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.072953   61195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:11:11.084283   61195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:11:11.096046   61195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:11:11.109800   61195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:11:11.109875   61195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:11:11.131367   61195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:11:11.145546   61195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:11.268716   61195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:11:11.393820   61195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:11:11.393919   61195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:11:11.399333   61195 start.go:563] Will wait 60s for crictl version
	I0920 22:11:11.399402   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:11.404727   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:11:11.461428   61195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:11:11.461515   61195 ssh_runner.go:195] Run: crio --version
	I0920 22:11:11.492371   61195 ssh_runner.go:195] Run: crio --version
	I0920 22:11:11.526738   61195 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:11:11.528094   61195 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:11:11.531895   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:11.533623   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:11:01 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:11:11.533648   61195 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:11:11.533878   61195 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:11:11.538932   61195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:11:11.553400   61195 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:11:11.553520   61195 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:11:11.553570   61195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:11:11.588456   61195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:11:11.588532   61195 ssh_runner.go:195] Run: which lz4
	I0920 22:11:11.593230   61195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:11:11.597798   61195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:11:11.597834   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:11:13.454227   61195 crio.go:462] duration metric: took 1.861025841s to copy over tarball
	I0920 22:11:13.454327   61195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:11:16.446728   61195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.992360699s)
	I0920 22:11:16.446768   61195 crio.go:469] duration metric: took 2.992493221s to extract the tarball
	I0920 22:11:16.446776   61195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:11:16.512697   61195 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:11:16.612476   61195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:11:16.612500   61195 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:11:16.612561   61195 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:11:16.612581   61195 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:16.612608   61195 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:16.612638   61195 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:11:16.612665   61195 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:11:16.612679   61195 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:16.612689   61195 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:16.612809   61195 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:16.614326   61195 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:16.614328   61195 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:11:16.614326   61195 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:16.614328   61195 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:16.614372   61195 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:11:16.614331   61195 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:16.614382   61195 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:11:16.614334   61195 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:16.776046   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:16.785501   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:16.791595   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:16.795423   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:11:16.803137   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:16.816674   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:16.829242   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:11:16.875757   61195 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:11:16.875804   61195 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:16.875850   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:16.940082   61195 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:11:16.940120   61195 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:16.940161   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.013044   61195 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:11:17.013098   61195 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:17.013111   61195 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:11:17.013142   61195 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:11:17.013149   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.013189   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.013143   61195 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:11:17.013222   61195 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:17.013234   61195 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:11:17.013263   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.013301   61195 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:11:17.013318   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:17.013265   61195 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:17.013331   61195 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:11:17.013345   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:17.013352   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.013357   61195 ssh_runner.go:195] Run: which crictl
	I0920 22:11:17.033662   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:17.033740   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:17.101033   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:11:17.101084   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:17.101113   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:17.101154   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:11:17.101121   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:17.135514   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:17.155894   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:17.253648   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:11:17.256305   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:11:17.283288   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:11:17.289283   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:11:17.289411   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:17.312255   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:11:17.319837   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:11:17.429384   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:11:17.429485   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:11:17.465062   61195 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:11:17.470920   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:11:17.470980   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:11:17.470987   61195 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:11:17.508423   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:11:17.512072   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:11:17.587365   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:11:17.700228   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:11:17.700253   61195 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:11:17.700352   61195 cache_images.go:92] duration metric: took 1.087840742s to LoadCachedImages
	W0920 22:11:17.700429   61195 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:11:17.700446   61195 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:11:17.700572   61195 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:11:17.700665   61195 ssh_runner.go:195] Run: crio config
	I0920 22:11:17.765582   61195 cni.go:84] Creating CNI manager for ""
	I0920 22:11:17.765607   61195 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:11:17.765618   61195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:11:17.765643   61195 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:11:17.765817   61195 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:11:17.765893   61195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:11:17.776924   61195 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:11:17.777022   61195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:11:17.788314   61195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:11:17.807049   61195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:11:17.828747   61195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:11:17.851690   61195 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:11:17.856366   61195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:11:17.870373   61195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:11:18.008072   61195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:11:18.029077   61195 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:11:18.029104   61195 certs.go:194] generating shared ca certs ...
	I0920 22:11:18.029126   61195 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.029296   61195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:11:18.029350   61195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:11:18.029364   61195 certs.go:256] generating profile certs ...
	I0920 22:11:18.029432   61195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:11:18.029449   61195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt with IP's: []
	I0920 22:11:18.125137   61195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt ...
	I0920 22:11:18.125171   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: {Name:mkf27b12b0b52b30cad3667f9240addbe98128d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.161801   61195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key ...
	I0920 22:11:18.161842   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key: {Name:mk9f85268888a8a9cf947e0391bcb071aa5febf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.162104   61195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:11:18.162128   61195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt.2074c1da with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.2]
	I0920 22:11:18.295857   61195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt.2074c1da ...
	I0920 22:11:18.295906   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt.2074c1da: {Name:mk41ed0925408b62da1fcc4695e69d900c5406be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.296094   61195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da ...
	I0920 22:11:18.296110   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da: {Name:mkdb5559385e232f54db2f54c0d8a41d6092bd68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.296218   61195 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt.2074c1da -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt
	I0920 22:11:18.296317   61195 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key
	I0920 22:11:18.296408   61195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:11:18.296434   61195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt with IP's: []
	I0920 22:11:18.503543   61195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt ...
	I0920 22:11:18.503573   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt: {Name:mke9d525e6b46d9f419cd9a1f9b224910b83b90d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.503775   61195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key ...
	I0920 22:11:18.503795   61195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key: {Name:mka12173ec14d9ca2888d9e3ce6bd2e19437648e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:11:18.504031   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:11:18.504073   61195 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:11:18.504081   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:11:18.504103   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:11:18.504133   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:11:18.504159   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:11:18.504198   61195 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:11:18.504847   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:11:18.535395   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:11:18.563749   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:11:18.592681   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:11:18.620880   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:11:18.652399   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:11:18.682525   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:11:18.718053   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:11:18.751528   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:11:18.783927   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:11:18.814757   61195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:11:18.849868   61195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:11:18.873627   61195 ssh_runner.go:195] Run: openssl version
	I0920 22:11:18.882672   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:11:18.900895   61195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:11:18.908857   61195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:11:18.908943   61195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:11:18.917788   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:11:18.936073   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:11:18.956589   61195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:11:18.965157   61195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:11:18.965224   61195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:11:18.972824   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:11:18.990165   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:11:19.006540   61195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:19.013219   61195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:19.013288   61195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:11:19.026759   61195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:11:19.046931   61195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:11:19.052767   61195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 22:11:19.052837   61195 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:11:19.052958   61195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:11:19.053021   61195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:11:19.104733   61195 cri.go:89] found id: ""
	I0920 22:11:19.104828   61195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:11:19.117341   61195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:11:19.128289   61195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:11:19.143223   61195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:11:19.143243   61195 kubeadm.go:157] found existing configuration files:
	
	I0920 22:11:19.143297   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:11:19.158884   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:11:19.159454   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:11:19.171541   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:11:19.182287   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:11:19.182356   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:11:19.192461   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:11:19.205853   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:11:19.205922   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:11:19.217694   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:11:19.229948   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:11:19.230014   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:11:19.241704   61195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:11:19.571908   61195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:13:17.700857   61195 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:13:17.700937   61195 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:13:17.702478   61195 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:13:17.702558   61195 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:13:17.702666   61195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:13:17.702790   61195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:13:17.702912   61195 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:13:17.703033   61195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:13:17.704933   61195 out.go:235]   - Generating certificates and keys ...
	I0920 22:13:17.705024   61195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:13:17.705116   61195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:13:17.705213   61195 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 22:13:17.705286   61195 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 22:13:17.705364   61195 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 22:13:17.705439   61195 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 22:13:17.705510   61195 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 22:13:17.705643   61195 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	I0920 22:13:17.705718   61195 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 22:13:17.705875   61195 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	I0920 22:13:17.705967   61195 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 22:13:17.706052   61195 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 22:13:17.706115   61195 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 22:13:17.706186   61195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:13:17.706263   61195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:13:17.706317   61195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:13:17.706372   61195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:13:17.706445   61195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:13:17.706586   61195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:13:17.706666   61195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:13:17.706710   61195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:13:17.706772   61195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:13:17.708865   61195 out.go:235]   - Booting up control plane ...
	I0920 22:13:17.708979   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:13:17.709073   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:13:17.709158   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:13:17.709264   61195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:13:17.709488   61195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:13:17.709568   61195 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:13:17.709675   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:13:17.709924   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:13:17.709987   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:13:17.710146   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:13:17.710204   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:13:17.710460   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:13:17.710553   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:13:17.710768   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:13:17.710855   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:13:17.711057   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:13:17.711067   61195 kubeadm.go:310] 
	I0920 22:13:17.711101   61195 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:13:17.711147   61195 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:13:17.711153   61195 kubeadm.go:310] 
	I0920 22:13:17.711182   61195 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:13:17.711221   61195 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:13:17.711334   61195 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:13:17.711343   61195 kubeadm.go:310] 
	I0920 22:13:17.711471   61195 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:13:17.711507   61195 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:13:17.711536   61195 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:13:17.711542   61195 kubeadm.go:310] 
	I0920 22:13:17.711639   61195 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:13:17.711709   61195 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:13:17.711721   61195 kubeadm.go:310] 
	I0920 22:13:17.711827   61195 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:13:17.711904   61195 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:13:17.712019   61195 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:13:17.712129   61195 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:13:17.712158   61195 kubeadm.go:310] 
	W0920 22:13:17.712274   61195 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-593412] and IPs [192.168.61.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:13:17.712321   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:13:18.778862   61195 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.066515421s)
	I0920 22:13:18.778941   61195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:13:18.793701   61195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:13:18.804265   61195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:13:18.804290   61195 kubeadm.go:157] found existing configuration files:
	
	I0920 22:13:18.804335   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:13:18.813970   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:13:18.814044   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:13:18.824460   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:13:18.834029   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:13:18.834108   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:13:18.844022   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:13:18.853549   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:13:18.853608   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:13:18.863414   61195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:13:18.872777   61195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:13:18.872840   61195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:13:18.883011   61195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:13:19.124761   61195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:15:15.567844   61195 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:15:15.567924   61195 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:15:15.569420   61195 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:15:15.569497   61195 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:15:15.569592   61195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:15:15.569675   61195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:15:15.569761   61195 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:15:15.569814   61195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:15:15.572413   61195 out.go:235]   - Generating certificates and keys ...
	I0920 22:15:15.572499   61195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:15:15.572566   61195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:15:15.572653   61195 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:15:15.572738   61195 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:15:15.572832   61195 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:15:15.572909   61195 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:15:15.573011   61195 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:15:15.573097   61195 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:15:15.573196   61195 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:15:15.573302   61195 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:15:15.573359   61195 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:15:15.573435   61195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:15:15.573508   61195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:15:15.573593   61195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:15:15.573655   61195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:15:15.573728   61195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:15:15.573854   61195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:15:15.573974   61195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:15:15.574025   61195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:15:15.574110   61195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:15:15.575275   61195 out.go:235]   - Booting up control plane ...
	I0920 22:15:15.575366   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:15:15.575450   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:15:15.575546   61195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:15:15.575647   61195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:15:15.575814   61195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:15:15.575883   61195 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:15:15.575963   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:15:15.576175   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:15:15.576255   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:15:15.576446   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:15:15.576518   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:15:15.576684   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:15:15.576745   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:15:15.576907   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:15:15.576973   61195 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:15:15.577143   61195 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:15:15.577154   61195 kubeadm.go:310] 
	I0920 22:15:15.577212   61195 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:15:15.577249   61195 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:15:15.577259   61195 kubeadm.go:310] 
	I0920 22:15:15.577289   61195 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:15:15.577318   61195 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:15:15.577405   61195 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:15:15.577413   61195 kubeadm.go:310] 
	I0920 22:15:15.577539   61195 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:15:15.577597   61195 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:15:15.577647   61195 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:15:15.577657   61195 kubeadm.go:310] 
	I0920 22:15:15.577741   61195 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:15:15.577812   61195 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:15:15.577818   61195 kubeadm.go:310] 
	I0920 22:15:15.577909   61195 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:15:15.577984   61195 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:15:15.578055   61195 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:15:15.578136   61195 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:15:15.578218   61195 kubeadm.go:310] 
	I0920 22:15:15.578222   61195 kubeadm.go:394] duration metric: took 3m56.52538682s to StartCluster
	I0920 22:15:15.578267   61195 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:15:15.578319   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:15:15.621956   61195 cri.go:89] found id: ""
	I0920 22:15:15.621984   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.621994   61195 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:15:15.622008   61195 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:15:15.622065   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:15:15.670134   61195 cri.go:89] found id: ""
	I0920 22:15:15.670161   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.670171   61195 logs.go:278] No container was found matching "etcd"
	I0920 22:15:15.670178   61195 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:15:15.670241   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:15:15.706653   61195 cri.go:89] found id: ""
	I0920 22:15:15.706689   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.706697   61195 logs.go:278] No container was found matching "coredns"
	I0920 22:15:15.706702   61195 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:15:15.706757   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:15:15.740843   61195 cri.go:89] found id: ""
	I0920 22:15:15.740875   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.740883   61195 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:15:15.740889   61195 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:15:15.740946   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:15:15.774608   61195 cri.go:89] found id: ""
	I0920 22:15:15.774634   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.774642   61195 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:15:15.774648   61195 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:15:15.774711   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:15:15.809236   61195 cri.go:89] found id: ""
	I0920 22:15:15.809270   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.809279   61195 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:15:15.809285   61195 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:15:15.809339   61195 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:15:15.845748   61195 cri.go:89] found id: ""
	I0920 22:15:15.845780   61195 logs.go:276] 0 containers: []
	W0920 22:15:15.845788   61195 logs.go:278] No container was found matching "kindnet"
	I0920 22:15:15.845797   61195 logs.go:123] Gathering logs for container status ...
	I0920 22:15:15.845808   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:15:15.896058   61195 logs.go:123] Gathering logs for kubelet ...
	I0920 22:15:15.896085   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:15:15.965900   61195 logs.go:123] Gathering logs for dmesg ...
	I0920 22:15:15.965952   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:15:15.987373   61195 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:15:15.987399   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:15:16.111974   61195 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:15:16.112007   61195 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:15:16.112026   61195 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W0920 22:15:16.216792   61195 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:15:16.216859   61195 out.go:270] * 
	* 
	W0920 22:15:16.216914   61195 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:15:16.216932   61195 out.go:270] * 
	* 
	W0920 22:15:16.217929   61195 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:15:16.221586   61195 out.go:201] 
	W0920 22:15:16.223079   61195 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:15:16.223138   61195 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:15:16.223155   61195 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:15:16.224747   61195 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 6 (229.079959ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:16.503898   64170 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-593412" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (285.00s)
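
The kubeadm output above repeats the same recovery path each time the kubelet health check fails: confirm the kubelet is actually running, inspect the control-plane containers that cri-o started, and retry the start with an explicit cgroup driver as the final suggestion proposes. Below is a minimal troubleshooting sketch collecting those commands, assuming shell access to the guest (for example via `minikube ssh -p old-k8s-version-593412`) and the cri-o socket path shown in the log; CONTAINERID is a placeholder, not a value taken from this run:

	# inside the guest: check whether the kubelet is running and why it may have exited
	systemctl status kubelet
	journalctl -xeu kubelet
	# list the control-plane containers cri-o started, then inspect the failing one
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# on the host: retry the start with the cgroup driver the log suggests
	minikube start -p old-k8s-version-593412 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The journalctl output usually names the underlying cause (for example a cgroup-driver mismatch or the "required cgroups disabled" condition the message mentions) before any static pod is created, which is why both kubeadm and minikube point at it first.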

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-032884 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-032884 --alsologtostderr -v=3: exit status 82 (2m0.802261562s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-032884"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:12:26.546089   62808 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:12:26.546229   62808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:12:26.546239   62808 out.go:358] Setting ErrFile to fd 2...
	I0920 22:12:26.546246   62808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:12:26.546516   62808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:12:26.546825   62808 out.go:352] Setting JSON to false
	I0920 22:12:26.546932   62808 mustload.go:65] Loading cluster: no-preload-032884
	I0920 22:12:26.547458   62808 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:12:26.547566   62808 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:12:26.547809   62808 mustload.go:65] Loading cluster: no-preload-032884
	I0920 22:12:26.547955   62808 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:12:26.547999   62808 stop.go:39] StopHost: no-preload-032884
	I0920 22:12:26.548580   62808 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:12:26.548631   62808 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:12:26.564504   62808 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35833
	I0920 22:12:26.564987   62808 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:12:26.565587   62808 main.go:141] libmachine: Using API Version  1
	I0920 22:12:26.565611   62808 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:12:26.565943   62808 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:12:26.569684   62808 out.go:177] * Stopping node "no-preload-032884"  ...
	I0920 22:12:26.571163   62808 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 22:12:26.571191   62808 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:12:26.571434   62808 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 22:12:26.571457   62808 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:12:26.574695   62808 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:12:26.575157   62808 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:11:26 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:12:26.575184   62808 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:12:26.575326   62808 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:12:26.575485   62808 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:12:26.575578   62808 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:12:26.575650   62808 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:12:26.678428   62808 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 22:12:26.742428   62808 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 22:12:26.829817   62808 main.go:141] libmachine: Stopping "no-preload-032884"...
	I0920 22:12:26.829846   62808 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:12:26.831592   62808 main.go:141] libmachine: (no-preload-032884) Calling .Stop
	I0920 22:12:26.835750   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 0/120
	I0920 22:12:27.837042   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 1/120
	I0920 22:12:28.838426   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 2/120
	I0920 22:12:29.839889   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 3/120
	I0920 22:12:30.841542   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 4/120
	I0920 22:12:31.843683   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 5/120
	I0920 22:12:32.845645   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 6/120
	I0920 22:12:33.847201   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 7/120
	I0920 22:12:34.849527   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 8/120
	I0920 22:12:35.851016   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 9/120
	I0920 22:12:36.852905   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 10/120
	I0920 22:12:37.854312   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 11/120
	I0920 22:12:38.855800   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 12/120
	I0920 22:12:39.857277   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 13/120
	I0920 22:12:40.858649   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 14/120
	I0920 22:12:41.860638   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 15/120
	I0920 22:12:42.862124   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 16/120
	I0920 22:12:43.863924   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 17/120
	I0920 22:12:44.866355   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 18/120
	I0920 22:12:45.868497   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 19/120
	I0920 22:12:46.870646   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 20/120
	I0920 22:12:47.871935   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 21/120
	I0920 22:12:48.873571   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 22/120
	I0920 22:12:49.875132   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 23/120
	I0920 22:12:50.876568   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 24/120
	I0920 22:12:51.878401   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 25/120
	I0920 22:12:52.879838   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 26/120
	I0920 22:12:53.881872   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 27/120
	I0920 22:12:54.883324   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 28/120
	I0920 22:12:55.884694   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 29/120
	I0920 22:12:56.886729   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 30/120
	I0920 22:12:57.888170   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 31/120
	I0920 22:12:58.889493   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 32/120
	I0920 22:12:59.891455   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 33/120
	I0920 22:13:00.892678   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 34/120
	I0920 22:13:01.894314   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 35/120
	I0920 22:13:02.895926   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 36/120
	I0920 22:13:03.897216   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 37/120
	I0920 22:13:04.898550   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 38/120
	I0920 22:13:05.899784   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 39/120
	I0920 22:13:07.153446   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 40/120
	I0920 22:13:08.155118   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 41/120
	I0920 22:13:09.157799   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 42/120
	I0920 22:13:10.159544   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 43/120
	I0920 22:13:11.161562   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 44/120
	I0920 22:13:12.163642   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 45/120
	I0920 22:13:13.165400   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 46/120
	I0920 22:13:14.166921   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 47/120
	I0920 22:13:15.169171   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 48/120
	I0920 22:13:16.170993   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 49/120
	I0920 22:13:17.173456   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 50/120
	I0920 22:13:18.174914   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 51/120
	I0920 22:13:19.176410   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 52/120
	I0920 22:13:20.178726   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 53/120
	I0920 22:13:21.180068   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 54/120
	I0920 22:13:22.181997   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 55/120
	I0920 22:13:23.183555   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 56/120
	I0920 22:13:24.184932   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 57/120
	I0920 22:13:25.186252   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 58/120
	I0920 22:13:26.187644   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 59/120
	I0920 22:13:27.189623   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 60/120
	I0920 22:13:28.191267   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 61/120
	I0920 22:13:29.193787   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 62/120
	I0920 22:13:30.195271   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 63/120
	I0920 22:13:31.197673   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 64/120
	I0920 22:13:32.199644   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 65/120
	I0920 22:13:33.201300   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 66/120
	I0920 22:13:34.202782   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 67/120
	I0920 22:13:35.204775   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 68/120
	I0920 22:13:36.206111   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 69/120
	I0920 22:13:37.208278   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 70/120
	I0920 22:13:38.209577   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 71/120
	I0920 22:13:39.211707   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 72/120
	I0920 22:13:40.213841   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 73/120
	I0920 22:13:41.215298   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 74/120
	I0920 22:13:42.217670   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 75/120
	I0920 22:13:43.219681   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 76/120
	I0920 22:13:44.222138   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 77/120
	I0920 22:13:45.223602   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 78/120
	I0920 22:13:46.225460   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 79/120
	I0920 22:13:47.227979   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 80/120
	I0920 22:13:48.229373   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 81/120
	I0920 22:13:49.230759   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 82/120
	I0920 22:13:50.232502   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 83/120
	I0920 22:13:51.233891   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 84/120
	I0920 22:13:52.235873   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 85/120
	I0920 22:13:53.237369   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 86/120
	I0920 22:13:54.238721   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 87/120
	I0920 22:13:55.240360   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 88/120
	I0920 22:13:56.241634   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 89/120
	I0920 22:13:57.243967   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 90/120
	I0920 22:13:58.245587   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 91/120
	I0920 22:13:59.246744   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 92/120
	I0920 22:14:00.248009   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 93/120
	I0920 22:14:01.249272   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 94/120
	I0920 22:14:02.251049   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 95/120
	I0920 22:14:03.252667   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 96/120
	I0920 22:14:04.254193   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 97/120
	I0920 22:14:05.255795   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 98/120
	I0920 22:14:06.257736   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 99/120
	I0920 22:14:07.260205   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 100/120
	I0920 22:14:08.261660   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 101/120
	I0920 22:14:09.263277   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 102/120
	I0920 22:14:10.265365   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 103/120
	I0920 22:14:11.266867   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 104/120
	I0920 22:14:12.268935   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 105/120
	I0920 22:14:13.270754   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 106/120
	I0920 22:14:14.272117   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 107/120
	I0920 22:14:15.273472   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 108/120
	I0920 22:14:16.275229   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 109/120
	I0920 22:14:17.277654   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 110/120
	I0920 22:14:18.279054   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 111/120
	I0920 22:14:19.280477   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 112/120
	I0920 22:14:20.282002   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 113/120
	I0920 22:14:21.283410   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 114/120
	I0920 22:14:22.285716   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 115/120
	I0920 22:14:23.287211   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 116/120
	I0920 22:14:24.288677   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 117/120
	I0920 22:14:25.290162   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 118/120
	I0920 22:14:26.291585   62808 main.go:141] libmachine: (no-preload-032884) Waiting for machine to stop 119/120
	I0920 22:14:27.292269   62808 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 22:14:27.292334   62808 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 22:14:27.294457   62808 out.go:201] 
	W0920 22:14:27.295913   62808 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 22:14:27.295941   62808 out.go:270] * 
	* 
	W0920 22:14:27.299126   62808 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:14:27.300626   62808 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-032884 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884: exit status 3 (18.556687113s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:14:45.859334   63851 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host
	E0920 22:14:45.859359   63851 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-032884" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.36s)
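
The stop failure above is a guest-shutdown timeout rather than a Kubernetes error: minikube backed up /etc/cni and /etc/kubernetes, asked the kvm2 driver to stop the machine, and gave up after 120 one-second polls with libvirt still reporting the domain as "Running" (exit status 82, GUEST_STOP_TIMEOUT). A small sketch of how one might confirm the domain state out-of-band and collect the log file the error box asks for, assuming virsh is available on the host, the qemu:///system URI this job passes to the kvm2 driver, and that the libvirt domain carries the profile name:

	# confirm what libvirt thinks the guest is doing
	virsh --connect qemu:///system list --all
	virsh --connect qemu:///system dominfo no-preload-032884
	# request a clean shutdown, then force power-off if it keeps reporting "running"
	virsh --connect qemu:///system shutdown no-preload-032884
	virsh --connect qemu:///system destroy no-preload-032884
	# gather logs for the GitHub issue the error box suggests
	minikube logs -p no-preload-032884 --file=logs.txt

The later "no route to host" status error suggests the guest did eventually go down, just not within the two-minute window the stop command waits.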

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-562750 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-562750 --alsologtostderr -v=3: exit status 82 (2m0.52516989s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-562750"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:13:16.002395   63397 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:13:16.002664   63397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:13:16.002674   63397 out.go:358] Setting ErrFile to fd 2...
	I0920 22:13:16.002679   63397 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:13:16.002911   63397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:13:16.003194   63397 out.go:352] Setting JSON to false
	I0920 22:13:16.003287   63397 mustload.go:65] Loading cluster: embed-certs-562750
	I0920 22:13:16.003670   63397 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:13:16.003738   63397 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:13:16.003909   63397 mustload.go:65] Loading cluster: embed-certs-562750
	I0920 22:13:16.004011   63397 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:13:16.004038   63397 stop.go:39] StopHost: embed-certs-562750
	I0920 22:13:16.004386   63397 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:13:16.004429   63397 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:13:16.019356   63397 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
	I0920 22:13:16.019894   63397 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:13:16.020427   63397 main.go:141] libmachine: Using API Version  1
	I0920 22:13:16.020452   63397 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:13:16.020925   63397 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:13:16.023918   63397 out.go:177] * Stopping node "embed-certs-562750"  ...
	I0920 22:13:16.026101   63397 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 22:13:16.026149   63397 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:13:16.026525   63397 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 22:13:16.026555   63397 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:13:16.030092   63397 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:13:16.030592   63397 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:11:49 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:13:16.030623   63397 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:13:16.030839   63397 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:13:16.031050   63397 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:13:16.031206   63397 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:13:16.031347   63397 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:13:16.152037   63397 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 22:13:16.213007   63397 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 22:13:16.272778   63397 main.go:141] libmachine: Stopping "embed-certs-562750"...
	I0920 22:13:16.272806   63397 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:13:16.274564   63397 main.go:141] libmachine: (embed-certs-562750) Calling .Stop
	I0920 22:13:16.279074   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 0/120
	I0920 22:13:17.281128   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 1/120
	I0920 22:13:18.282547   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 2/120
	I0920 22:13:19.284361   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 3/120
	I0920 22:13:20.286358   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 4/120
	I0920 22:13:21.288359   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 5/120
	I0920 22:13:22.289883   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 6/120
	I0920 22:13:23.291371   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 7/120
	I0920 22:13:24.292855   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 8/120
	I0920 22:13:25.294235   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 9/120
	I0920 22:13:26.296677   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 10/120
	I0920 22:13:27.298110   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 11/120
	I0920 22:13:28.299793   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 12/120
	I0920 22:13:29.301439   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 13/120
	I0920 22:13:30.303147   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 14/120
	I0920 22:13:31.305139   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 15/120
	I0920 22:13:32.306557   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 16/120
	I0920 22:13:33.308243   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 17/120
	I0920 22:13:34.309512   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 18/120
	I0920 22:13:35.310917   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 19/120
	I0920 22:13:36.312353   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 20/120
	I0920 22:13:37.313777   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 21/120
	I0920 22:13:38.315116   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 22/120
	I0920 22:13:39.316459   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 23/120
	I0920 22:13:40.317870   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 24/120
	I0920 22:13:41.319812   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 25/120
	I0920 22:13:42.321270   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 26/120
	I0920 22:13:43.322766   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 27/120
	I0920 22:13:44.324429   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 28/120
	I0920 22:13:45.325923   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 29/120
	I0920 22:13:46.328400   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 30/120
	I0920 22:13:47.329755   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 31/120
	I0920 22:13:48.331314   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 32/120
	I0920 22:13:49.332629   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 33/120
	I0920 22:13:50.334068   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 34/120
	I0920 22:13:51.336147   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 35/120
	I0920 22:13:52.337816   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 36/120
	I0920 22:13:53.339322   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 37/120
	I0920 22:13:54.340848   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 38/120
	I0920 22:13:55.342499   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 39/120
	I0920 22:13:56.345050   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 40/120
	I0920 22:13:57.346606   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 41/120
	I0920 22:13:58.347966   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 42/120
	I0920 22:13:59.349509   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 43/120
	I0920 22:14:00.351009   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 44/120
	I0920 22:14:01.352963   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 45/120
	I0920 22:14:02.354633   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 46/120
	I0920 22:14:03.356031   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 47/120
	I0920 22:14:04.357327   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 48/120
	I0920 22:14:05.358829   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 49/120
	I0920 22:14:06.360861   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 50/120
	I0920 22:14:07.362173   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 51/120
	I0920 22:14:08.363735   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 52/120
	I0920 22:14:09.365278   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 53/120
	I0920 22:14:10.366832   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 54/120
	I0920 22:14:11.368959   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 55/120
	I0920 22:14:12.370577   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 56/120
	I0920 22:14:13.372507   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 57/120
	I0920 22:14:14.373857   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 58/120
	I0920 22:14:15.375489   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 59/120
	I0920 22:14:16.376909   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 60/120
	I0920 22:14:17.378504   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 61/120
	I0920 22:14:18.380668   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 62/120
	I0920 22:14:19.382196   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 63/120
	I0920 22:14:20.383681   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 64/120
	I0920 22:14:21.385972   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 65/120
	I0920 22:14:22.387520   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 66/120
	I0920 22:14:23.389013   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 67/120
	I0920 22:14:24.390435   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 68/120
	I0920 22:14:25.391718   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 69/120
	I0920 22:14:26.393693   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 70/120
	I0920 22:14:27.395049   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 71/120
	I0920 22:14:28.396398   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 72/120
	I0920 22:14:29.397848   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 73/120
	I0920 22:14:30.399323   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 74/120
	I0920 22:14:31.401290   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 75/120
	I0920 22:14:32.402641   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 76/120
	I0920 22:14:33.404012   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 77/120
	I0920 22:14:34.405414   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 78/120
	I0920 22:14:35.406735   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 79/120
	I0920 22:14:36.409160   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 80/120
	I0920 22:14:37.410443   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 81/120
	I0920 22:14:38.411970   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 82/120
	I0920 22:14:39.413503   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 83/120
	I0920 22:14:40.415040   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 84/120
	I0920 22:14:41.417066   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 85/120
	I0920 22:14:42.418499   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 86/120
	I0920 22:14:43.419998   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 87/120
	I0920 22:14:44.421517   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 88/120
	I0920 22:14:45.423043   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 89/120
	I0920 22:14:46.425113   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 90/120
	I0920 22:14:47.426415   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 91/120
	I0920 22:14:48.427859   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 92/120
	I0920 22:14:49.429318   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 93/120
	I0920 22:14:50.430675   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 94/120
	I0920 22:14:51.432855   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 95/120
	I0920 22:14:52.434289   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 96/120
	I0920 22:14:53.435660   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 97/120
	I0920 22:14:54.437154   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 98/120
	I0920 22:14:55.438784   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 99/120
	I0920 22:14:56.441045   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 100/120
	I0920 22:14:57.442747   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 101/120
	I0920 22:14:58.444440   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 102/120
	I0920 22:14:59.445941   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 103/120
	I0920 22:15:00.447515   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 104/120
	I0920 22:15:01.449755   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 105/120
	I0920 22:15:02.451188   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 106/120
	I0920 22:15:03.452589   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 107/120
	I0920 22:15:04.454014   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 108/120
	I0920 22:15:05.455431   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 109/120
	I0920 22:15:06.457882   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 110/120
	I0920 22:15:07.459225   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 111/120
	I0920 22:15:08.461647   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 112/120
	I0920 22:15:09.463224   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 113/120
	I0920 22:15:10.465678   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 114/120
	I0920 22:15:11.467807   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 115/120
	I0920 22:15:12.469542   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 116/120
	I0920 22:15:13.470857   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 117/120
	I0920 22:15:14.472435   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 118/120
	I0920 22:15:15.473916   63397 main.go:141] libmachine: (embed-certs-562750) Waiting for machine to stop 119/120
	I0920 22:15:16.475336   63397 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 22:15:16.475383   63397 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 22:15:16.477605   63397 out.go:201] 
	W0920 22:15:16.479031   63397 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 22:15:16.479051   63397 out.go:270] * 
	* 
	W0920 22:15:16.482085   63397 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:15:16.483416   63397 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p embed-certs-562750 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750: exit status 3 (18.525233519s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:35.011313   64202 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host
	E0920 22:15:35.011333   64202 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-562750" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.05s)
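
The stop above polled the libvirt domain once per second for the full 120-iteration timeout (0/120 through 119/120), then gave up with exit status 82 (GUEST_STOP_TIMEOUT); the post-mortem status probe then failed with exit status 3 because SSH to 192.168.50.193:22 returned "no route to host". A minimal sketch for replaying the same sequence by hand, assuming a local checkout with the same "embed-certs-562750" kvm2 profile (the virsh call is not part of the test and is only an illustrative way to inspect the libvirt domain directly):

	out/minikube-linux-amd64 stop -p embed-certs-562750 --alsologtostderr -v=3
	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-562750 -n embed-certs-562750
	sudo virsh list --all    # check whether the corresponding domain is still listed as running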

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-799783 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-799783 --alsologtostderr -v=3: exit status 82 (2m0.529231111s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-799783"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:14:13.755396   63785 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:14:13.755654   63785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:14:13.755663   63785 out.go:358] Setting ErrFile to fd 2...
	I0920 22:14:13.755668   63785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:14:13.755880   63785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:14:13.756149   63785 out.go:352] Setting JSON to false
	I0920 22:14:13.756261   63785 mustload.go:65] Loading cluster: default-k8s-diff-port-799783
	I0920 22:14:13.756668   63785 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:14:13.756745   63785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:14:13.756918   63785 mustload.go:65] Loading cluster: default-k8s-diff-port-799783
	I0920 22:14:13.757051   63785 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:14:13.757092   63785 stop.go:39] StopHost: default-k8s-diff-port-799783
	I0920 22:14:13.757534   63785 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:14:13.757576   63785 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:14:13.772452   63785 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34373
	I0920 22:14:13.773031   63785 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:14:13.773619   63785 main.go:141] libmachine: Using API Version  1
	I0920 22:14:13.773642   63785 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:14:13.774044   63785 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:14:13.776503   63785 out.go:177] * Stopping node "default-k8s-diff-port-799783"  ...
	I0920 22:14:13.777725   63785 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0920 22:14:13.777749   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:14:13.777957   63785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0920 22:14:13.777978   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:14:13.780727   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:14:13.781139   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:13:22 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:14:13.781167   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:14:13.781318   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:14:13.781518   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:14:13.781667   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:14:13.781802   63785 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:14:13.903670   63785 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0920 22:14:13.973702   63785 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0920 22:14:14.041236   63785 main.go:141] libmachine: Stopping "default-k8s-diff-port-799783"...
	I0920 22:14:14.041271   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:14:14.043080   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Stop
	I0920 22:14:14.046280   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 0/120
	I0920 22:14:15.047703   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 1/120
	I0920 22:14:16.049568   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 2/120
	I0920 22:14:17.051052   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 3/120
	I0920 22:14:18.052403   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 4/120
	I0920 22:14:19.054344   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 5/120
	I0920 22:14:20.055855   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 6/120
	I0920 22:14:21.057340   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 7/120
	I0920 22:14:22.058884   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 8/120
	I0920 22:14:23.060937   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 9/120
	I0920 22:14:24.062419   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 10/120
	I0920 22:14:25.063820   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 11/120
	I0920 22:14:26.065007   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 12/120
	I0920 22:14:27.066486   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 13/120
	I0920 22:14:28.067869   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 14/120
	I0920 22:14:29.069779   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 15/120
	I0920 22:14:30.071065   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 16/120
	I0920 22:14:31.072251   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 17/120
	I0920 22:14:32.073622   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 18/120
	I0920 22:14:33.075058   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 19/120
	I0920 22:14:34.077632   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 20/120
	I0920 22:14:35.078918   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 21/120
	I0920 22:14:36.080351   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 22/120
	I0920 22:14:37.081650   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 23/120
	I0920 22:14:38.083115   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 24/120
	I0920 22:14:39.085335   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 25/120
	I0920 22:14:40.086848   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 26/120
	I0920 22:14:41.088341   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 27/120
	I0920 22:14:42.089917   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 28/120
	I0920 22:14:43.091409   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 29/120
	I0920 22:14:44.093493   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 30/120
	I0920 22:14:45.094925   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 31/120
	I0920 22:14:46.096276   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 32/120
	I0920 22:14:47.097792   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 33/120
	I0920 22:14:48.099235   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 34/120
	I0920 22:14:49.101023   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 35/120
	I0920 22:14:50.102376   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 36/120
	I0920 22:14:51.103875   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 37/120
	I0920 22:14:52.105218   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 38/120
	I0920 22:14:53.106854   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 39/120
	I0920 22:14:54.109419   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 40/120
	I0920 22:14:55.111068   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 41/120
	I0920 22:14:56.112461   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 42/120
	I0920 22:14:57.114139   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 43/120
	I0920 22:14:58.115822   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 44/120
	I0920 22:14:59.118255   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 45/120
	I0920 22:15:00.119754   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 46/120
	I0920 22:15:01.121190   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 47/120
	I0920 22:15:02.122614   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 48/120
	I0920 22:15:03.124174   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 49/120
	I0920 22:15:04.126083   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 50/120
	I0920 22:15:05.127452   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 51/120
	I0920 22:15:06.128902   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 52/120
	I0920 22:15:07.130598   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 53/120
	I0920 22:15:08.131994   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 54/120
	I0920 22:15:09.134137   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 55/120
	I0920 22:15:10.135547   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 56/120
	I0920 22:15:11.137029   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 57/120
	I0920 22:15:12.138326   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 58/120
	I0920 22:15:13.140026   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 59/120
	I0920 22:15:14.141762   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 60/120
	I0920 22:15:15.143153   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 61/120
	I0920 22:15:16.145330   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 62/120
	I0920 22:15:17.146698   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 63/120
	I0920 22:15:18.148197   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 64/120
	I0920 22:15:19.150268   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 65/120
	I0920 22:15:20.151741   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 66/120
	I0920 22:15:21.153235   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 67/120
	I0920 22:15:22.154652   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 68/120
	I0920 22:15:23.156198   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 69/120
	I0920 22:15:24.157618   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 70/120
	I0920 22:15:25.159020   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 71/120
	I0920 22:15:26.160385   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 72/120
	I0920 22:15:27.162060   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 73/120
	I0920 22:15:28.163319   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 74/120
	I0920 22:15:29.165574   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 75/120
	I0920 22:15:30.167178   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 76/120
	I0920 22:15:31.168786   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 77/120
	I0920 22:15:32.170498   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 78/120
	I0920 22:15:33.172057   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 79/120
	I0920 22:15:34.174365   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 80/120
	I0920 22:15:35.175877   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 81/120
	I0920 22:15:36.177501   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 82/120
	I0920 22:15:37.179055   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 83/120
	I0920 22:15:38.180578   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 84/120
	I0920 22:15:39.182711   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 85/120
	I0920 22:15:40.184141   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 86/120
	I0920 22:15:41.185685   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 87/120
	I0920 22:15:42.187127   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 88/120
	I0920 22:15:43.188417   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 89/120
	I0920 22:15:44.189822   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 90/120
	I0920 22:15:45.191176   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 91/120
	I0920 22:15:46.192717   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 92/120
	I0920 22:15:47.194171   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 93/120
	I0920 22:15:48.195803   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 94/120
	I0920 22:15:49.198093   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 95/120
	I0920 22:15:50.199527   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 96/120
	I0920 22:15:51.200917   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 97/120
	I0920 22:15:52.202386   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 98/120
	I0920 22:15:53.203826   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 99/120
	I0920 22:15:54.205896   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 100/120
	I0920 22:15:55.207445   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 101/120
	I0920 22:15:56.208880   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 102/120
	I0920 22:15:57.210363   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 103/120
	I0920 22:15:58.211684   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 104/120
	I0920 22:15:59.213365   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 105/120
	I0920 22:16:00.214814   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 106/120
	I0920 22:16:01.217131   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 107/120
	I0920 22:16:02.218417   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 108/120
	I0920 22:16:03.219864   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 109/120
	I0920 22:16:04.222165   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 110/120
	I0920 22:16:05.223629   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 111/120
	I0920 22:16:06.225007   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 112/120
	I0920 22:16:07.226444   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 113/120
	I0920 22:16:08.228017   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 114/120
	I0920 22:16:09.230112   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 115/120
	I0920 22:16:10.231758   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 116/120
	I0920 22:16:11.233176   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 117/120
	I0920 22:16:12.234633   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 118/120
	I0920 22:16:13.236051   63785 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for machine to stop 119/120
	I0920 22:16:14.236609   63785 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0920 22:16:14.236675   63785 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0920 22:16:14.238509   63785 out.go:201] 
	W0920 22:16:14.239767   63785 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0920 22:16:14.239786   63785 out.go:270] * 
	* 
	W0920 22:16:14.242868   63785 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:16:14.244216   63785 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-799783 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783: exit status 3 (18.620747151s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:16:32.867305   64658 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	E0920 22:16:32.867327   64658 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-799783" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.15s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884: exit status 3 (3.167933153s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:14:49.027354   63948 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host
	E0920 22:14:49.027432   63948 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-032884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-032884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154944103s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-032884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884: exit status 3 (3.060823819s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:14:58.243373   64029 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host
	E0920 22:14:58.243402   64029 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.19:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-032884" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
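
The addon enable above exited with status 11 (MK_ADDON_ENABLE_PAUSED) because minikube first checks for paused containers by running crictl over SSH, and the host at 192.168.72.19:22 was unreachable ("no route to host") after the failed stop. A minimal sketch of the same check, assuming the profile names from this run; the ssh/crictl probe is only illustrative and succeeds only once the VM is reachable again:

	out/minikube-linux-amd64 ssh -p no-preload-032884 -- sudo crictl ps    # fails with the same "no route to host" while the VM is unreachable
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-032884 --images=MetricsScraper=registry.k8s.io/echoserver:1.4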

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-593412 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-593412 create -f testdata/busybox.yaml: exit status 1 (48.272694ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-593412" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-593412 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 6 (222.404826ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:16.777114   64236 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-593412" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 6 (225.873784ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:17.003126   64288 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-593412" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.50s)
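
Both kubectl calls above failed because the "old-k8s-version-593412" endpoint was never written to /home/jenkins/minikube-integration/19672-9614/kubeconfig (status.go:451), so the profile reports "Running" while kubectl has no matching context. A minimal sketch of the check and of the repair that the warning itself suggests, assuming the same profile name:

	kubectl config get-contexts                                        # the old-k8s-version-593412 context should be missing
	out/minikube-linux-amd64 update-context -p old-k8s-version-593412  # regenerate the kubeconfig entry for the profile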

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-593412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-593412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m42.053286403s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-593412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-593412 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-593412 describe deploy/metrics-server -n kube-system: exit status 1 (42.994912ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-593412" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-593412 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 6 (223.286544ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:16:59.323976   65025 status.go:451] kubeconfig endpoint: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-593412" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (102.32s)
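
The metrics-server enable above exited with status 10 because the addon callback applies the metrics-server manifests with kubectl inside the VM and the connection to the apiserver at localhost:8443 was refused, i.e. the apiserver was not serving; the follow-up describe then failed because the kubectl context does not exist (see the DeployApp failure above). A minimal sketch for checking the apiserver before retrying, assuming the same profile; the crictl probe is only illustrative:

	out/minikube-linux-amd64 ssh -p old-k8s-version-593412 -- sudo crictl ps -a    # look for a running kube-apiserver container
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-593412 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain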

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750: exit status 3 (3.167536571s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:38.179327   64415 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host
	E0920 22:15:38.179348   64415 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-562750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-562750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153000484s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-562750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750: exit status 3 (3.062852525s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:15:47.395391   64496 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host
	E0920 22:15:47.395410   64496 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.193:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-562750" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783: exit status 3 (3.167700608s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:16:36.035336   64820 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	E0920 22:16:36.035371   64820 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.156134755s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-799783 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783: exit status 3 (3.05945197s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 22:16:45.251347   64900 status.go:410] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	E0920 22:16:45.251368   64900 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-799783" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (705.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0920 22:19:11.809116   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:20:09.131914   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:24:11.809363   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m41.865864511s)

-- stdout --
	* [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
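	Note: the grep/rm pairs above are minikube's stale-kubeconfig cleanup before it retries kubeadm init. A minimal shell sketch of the equivalent check-and-remove loop, illustrative only and using just the paths and endpoint shown in this log (not minikube's actual implementation):
	# Illustrative sketch: drop any kubeconfig that does not point at the
	# expected control-plane endpoint so that kubeadm init can rewrite it.
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" 2>/dev/null; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done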
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
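	Note: the repeated [kubelet-check] lines come from kubeadm polling the kubelet's local healthz endpoint while the static control-plane pods never start. An illustrative manual equivalent of that probe, plus the follow-up checks kubeadm suggests in the output above, run on the node itself (commands mirror the log and are not part of the captured run):
	# Probe the kubelet health endpoint, then inspect the service, its journal,
	# and any control-plane containers via CRI-O, as recommended above.
	curl -sSL http://localhost:10248/healthz; echo
	sudo systemctl status kubelet --no-pager
	sudo journalctl -xeu kubelet | tail -n 50
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause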
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	* 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	* 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
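The suggestion printed in the log above points at a kubelet cgroup-driver mismatch on the old v1.20.0 control plane. A sketch of a manual retry, reusing the same profile and flags as the failing start command and adding only the override that the minikube output itself suggests (a starting point, not a verified fix for this run), would be:

	# retry the failing start with the kubelet cgroup driver pinned to systemd,
	# as suggested by the "Suggestion:" line in the minikube output above
	out/minikube-linux-amd64 start -p old-k8s-version-593412 --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# if the retry still stalls in wait-control-plane, the kubeadm output above
	# already names the next diagnostics to run on the node:
	#   journalctl -xeu kubelet
	#   crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

If the kubelet never becomes healthy, the journalctl output is the more telling of the two, since the control-plane static pods cannot start until the kubelet does.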
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (239.955901ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25: (1.637614787s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
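	Editor's note: the repeated "Error dialing TCP 192.168.72.19:22: connect: no route to host" lines above are libmachine probing the guest's SSH port while the no-preload VM is still unreachable. A minimal, illustrative Go sketch of that kind of TCP probe is below; the probeSSH name is hypothetical and not minikube's actual API.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probeSSH attempts a TCP connection to the guest's SSH port, which is
	// what produces the repeated "dial tcp ...:22" errors while the VM is down.
	func probeSSH(addr string, timeout time.Duration) error {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		// 192.168.72.19:22 is the address the log above keeps probing.
		if err := probeSSH("192.168.72.19:22", 3*time.Second); err != nil {
			fmt.Println("Error dialing TCP:", err)
		}
	}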
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
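	Editor's note: the "will retry after Xms: waiting for machine to come up" lines show retry.go polling for the guest's DHCP lease with a randomized, growing delay. The following is a hedged Go sketch of the same wait-with-backoff pattern, assuming a hypothetical lookupLeaseIP helper in place of the real libvirt lease lookup; the delays and cap are illustrative, not minikube's exact values.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for querying the hypervisor's DHCP leases for the
	// guest's MAC address; here it always fails so the retry path is exercised.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	// waitForIP retries with a randomized, growing delay, mirroring the
	// "will retry after Xms: waiting for machine to come up" lines above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if delay < 2*time.Second {
				delay *= 2 // back off, capped at a couple of seconds
			}
		}
		return "", fmt.Errorf("timed out waiting for IP of %s", mac)
	}

	func main() {
		if _, err := waitForIP("52:54:00:be:17:82", 3*time.Second); err != nil {
			fmt.Println(err)
		}
	}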
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
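	Editor's note: fix.go above reads the guest clock over SSH ("date +%s.%N"), compares it against the recorded remote time, and skips a resync because the ~73ms skew is within tolerance. A small illustrative Go sketch of that comparison follows; the 2-second tolerance is an assumption for the example, not necessarily the value minikube uses.

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance compares guest and host clocks and reports
	// whether the skew is small enough to skip a resync, like the
	// "guest clock delta is within tolerance" check above.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		// Values taken from the log: guest "date +%s.%N" output vs. the remote timestamp.
		guest := time.Unix(1726870795, 575298210)
		host := guest.Add(-73306228 * time.Nanosecond) // the ~73.3ms delta recorded above
		delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second) // assumed tolerance
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, ok)
	}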
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
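	Editor's note: the failed "sudo sysctl net.bridge.bridge-nf-call-iptables" followed by "sudo modprobe br_netfilter" above is a probe-then-load fallback: if the bridge netfilter sysctl is absent, the module is loaded before IP forwarding is enabled. A rough Go sketch of that check is below; it assumes sudo and modprobe are available on the host and is not the exact minikube code path.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBrNetfilter mirrors the probe-then-load fallback above: if the
	// bridge-nf-call-iptables sysctl is missing, try loading br_netfilter.
	func ensureBrNetfilter() error {
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err == nil {
			return nil // sysctl already present, nothing to load
		}
		out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput()
		if err != nil {
			return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := ensureBrNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}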
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
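Note: the `[Unit]`/`[Service]` fragment above is the kubelet systemd drop-in minikube generates for this node before scp'ing it to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`. Below is a minimal sketch of rendering a similar drop-in with `text/template`; the struct, field names, and template text are simplified assumptions, not minikube's internal types.

```go
// Sketch of rendering a kubelet systemd drop-in from node-specific values.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	BinaryDir string
	NodeName  string
	NodeIP    string
}

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	opts := kubeletOpts{
		BinaryDir: "/var/lib/minikube/binaries/v1.31.1",
		NodeName:  "embed-certs-562750",
		NodeIP:    "192.168.50.193",
	}
	// Writes the rendered drop-in to stdout instead of a systemd directory.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```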
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
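Note: the YAML above is the generated kubeadm/kubelet/kube-proxy configuration that gets written to `/var/tmp/minikube/kubeadm.yaml.new` a few lines further down. Below is a minimal sketch of producing one such fragment by marshalling a Go struct to YAML; it assumes `gopkg.in/yaml.v3` is available, and the struct is a simplified stand-in for the real KubeletConfiguration type, not minikube's implementation.

```go
// Sketch of emitting a kubelet config fragment from a Go struct.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	APIVersion    string            `yaml:"apiVersion"`
	Kind          string            `yaml:"kind"`
	CgroupDriver  string            `yaml:"cgroupDriver"`
	ClusterDomain string            `yaml:"clusterDomain"`
	FailSwapOn    bool              `yaml:"failSwapOn"`
	StaticPodPath string            `yaml:"staticPodPath"`
	EvictionHard  map[string]string `yaml:"evictionHard"`
}

func main() {
	cfg := kubeletConfig{
		APIVersion:    "kubelet.config.k8s.io/v1beta1",
		Kind:          "KubeletConfiguration",
		CgroupDriver:  "cgroupfs",
		ClusterDomain: "cluster.local",
		FailSwapOn:    false,
		StaticPodPath: "/etc/kubernetes/manifests",
		EvictionHard: map[string]string{
			"nodefs.available":  "0%",
			"nodefs.inodesFree": "0%",
			"imagefs.available": "0%",
		},
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```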
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
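Note: the bash one-liner above pins `control-plane.minikube.internal` to the node IP in `/etc/hosts` by dropping any stale entry and appending the current one. Below is a minimal sketch of the same rewrite in Go, writing to a temporary copy rather than `/etc/hosts`; the output path is an illustrative assumption.

```go
// Sketch of refreshing the control-plane alias in a hosts file.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHostEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		// Mirror `grep -v $'\tcontrol-plane.minikube.internal$'`: drop stale entries.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	updated := pinHostEntry(strings.TrimRight(string(in), "\n"),
		"192.168.50.193", "control-plane.minikube.internal")
	if err := os.WriteFile("/tmp/hosts.updated", []byte(updated), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.updated")
}
```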
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
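Note: the run of `openssl x509 -checkend 86400` calls above verifies that each control-plane certificate is still valid for at least another 24 hours before the cluster is restarted. Below is a minimal sketch of an equivalent check with the standard library's `crypto/x509`; the certificate path is an illustrative assumption.

```go
// Sketch of checking whether a PEM certificate expires within a window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the window (cf. -checkend 86400).
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 24h")
	}
}
```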
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
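Note: the grep/rm sequence above is the stale-config cleanup during the control-plane restart: each `/etc/kubernetes/*.conf` is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. Below is a minimal sketch of that loop using the paths from the log; running it for real would require root inside the VM, and it is only an illustration of the pattern, not minikube's code.

```go
// Sketch of removing kubeconfig files that do not target the expected endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it and let kubeadm rewrite it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Printf("%s already targets %s, keeping it\n", f, endpoint)
	}
}
```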
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
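Note: the healthz exchanges above show the restarted apiserver settling: first 403 for the anonymous probe, then 500 while etcd and the bootstrap post-start hooks finish, and finally 200. Below is a minimal sketch of such a polling loop; the URL and timeout are illustrative assumptions, and TLS verification is skipped because the probe goes straight to the node IP rather than through a trusted name.

```go
// Sketch of polling an apiserver /healthz endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.193:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
```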
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
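Note: the pod_ready entries above wait for the system-critical pods to report Ready, skipping each one while the node itself still has status Ready=False. Below is a minimal sketch of a comparable wait that shells out to kubectl instead of using client-go; the context, namespace, pod name, and timeout are illustrative assumptions.

```go
// Sketch of polling a pod's Ready condition via kubectl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(context, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("embed-certs-562750", "kube-system", "etcd-embed-certs-562750")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```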
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
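The repeated "will retry after ...: waiting for machine to come up" lines are a jittered, growing backoff while libvirt hands the VM a DHCP lease. A rough sketch of that wait loop follows; this is not the actual retry.go implementation, and lookupIP is a placeholder for the lease query:

package machinewait

import (
	"errors"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// waitForIP polls lookupIP until it returns an address or the timeout expires,
// sleeping a jittered, slowly growing interval between probes (like the
// "will retry after ..." durations above).
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
		delay = delay * 3 / 2
	}
	return "", errNoLease
}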
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
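In the preload step above, crictl first reports no images for v1.31.1, so the ~388 MB lz4 tarball is copied in, unpacked into /var with xattrs preserved, removed, and crictl is re-run to confirm all images are now present. A condensed sketch of that sequence, with an assumed Runner/Copy helper rather than the real preload.go/cache_images.go code:

package preload

import "fmt"

// Runner abstracts the remote command runner and file copier used above.
type Runner interface {
	Run(cmd string) error
	Copy(localPath, remotePath string) error
}

func ensurePreload(r Runner, localTarball string) error {
	// 1. Push the lz4 tarball built for this k8s version / container runtime.
	if err := r.Copy(localTarball, "/preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("copy preload: %w", err)
	}
	// 2. Unpack into /var, preserving xattrs so image layers keep their
	//    capabilities (matches the tar flags in the log).
	if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	// 3. Clean up; a follow-up `crictl images` should now list the images.
	return r.Run("sudo rm -f /preloaded.tar.lz4")
}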
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
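The openssl x509 -checkend 86400 calls above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will not expire in that window, so the existing certificate can be reused instead of regenerated. The same check expressed with Go's crypto/x509, as an illustrative helper (path and function name are assumptions):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// validForAnotherDay reports whether the PEM certificate at pemPath is still
// valid 24 hours from now, i.e. the Go equivalent of `openssl x509 -checkend 86400`.
func validForAnotherDay(pemPath string) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}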
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
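Each of the four grep/rm pairs above follows the same rule: if an existing kubeconfig under /etc/kubernetes does not mention https://control-plane.minikube.internal:8444, delete it so the subsequent "kubeadm init phase kubeconfig all" regenerates it. Condensed into one loop (a hypothetical helper, not the kubeadm.go code):

package restartcfg

import "fmt"

// Runner is a stand-in for the SSH command runner in the log above.
type Runner interface {
	Run(cmd string) error
}

func cleanStaleKubeconfigs(r Runner, endpoint string) error {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// A non-zero grep exit status means the endpoint is absent (or the
		// file is missing); either way the file must not survive the restart.
		if err := r.Run(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
			if rmErr := r.Run("sudo rm -f " + f); rmErr != nil {
				return rmErr
			}
		}
	}
	return nil
}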
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
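The healthz polling above tolerates early 403s (anonymous access not yet authorized) and 500s (post-start hooks such as rbac/bootstrap-roles still pending) and only stops once /healthz returns 200 with body "ok". A simplified sketch of that loop, not the api_server.go implementation; TLS verification is skipped here only because this is an anonymous probe:

package apiwait

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz probes url until it returns 200 "ok" or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver reports healthy
			}
			// 403: anonymous RBAC not bootstrapped yet; 500: post-start hooks
			// still failing. Both mean "keep retrying".
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy within %s", url, timeout)
}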
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
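The 496-byte 1-k8s.conflist written above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. A bridge conflist of this kind has roughly the following shape (values here are illustrative; this is not the exact file minikube generates):

{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}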
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
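The pod_ready.go messages above implement a simple predicate: a system pod counts as Ready when its PodReady condition is True, and while the hosting node's own Ready condition is still False the wait is short-circuited with the "node ... hosting pod ... is currently not Ready (skipping!)" errors. Expressed against the k8s.io/api types (hypothetical helpers, not minikube's code):

package podwait

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether the pod's PodReady condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeIsReady reports whether the node's Ready condition is True; when it is
// not, per-pod waits like the ones above are skipped rather than timed out.
func nodeIsReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}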
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
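
The kubelet unit drop-in printed above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes, see the scp line further down). As a rough illustration only, not minikube's actual template code, the same drop-in could be rendered from a handful of parameters with text/template; every value below is taken straight from this log:

package main

import (
	"os"
	"text/template"
)

// dropin mirrors the [Unit]/[Service]/[Install] snippet logged above;
// the placeholders are the only values that vary per cluster.
const dropin = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet-dropin").Parse(dropin))
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.20.0",
		"NodeName":          "old-k8s-version-593412",
		"NodeIP":            "192.168.61.2",
	}); err != nil {
		panic(err)
	}
}
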
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
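
At this point the multi-document kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) has been written to /var/tmp/minikube/kubeadm.yaml.new on the node. A small sketch, assuming gopkg.in/yaml.v3 is available, of reading a couple of fields back out of that multi-document file; the struct below covers only the fields being inspected and is not a minikube type:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// doc holds only the fields we want to read back; everything else in the
// four YAML documents is ignored by the decoder.
type doc struct {
	Kind              string `yaml:"kind"`
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var d doc
		if err := dec.Decode(&d); err != nil {
			break // io.EOF once all documents have been read
		}
		if d.Kind == "ClusterConfiguration" {
			fmt.Println(d.KubernetesVersion, d.Networking.PodSubnet, d.Networking.ServiceSubnet)
		}
	}
}
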
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
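
The bash one-liner above makes the control-plane.minikube.internal mapping idempotent: any existing line for that host is dropped and the current IP is appended. The same idea expressed in Go, purely as a sketch (it would need root, and the IP literal is simply copied from this run):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.61.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for control-plane.minikube.internal.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
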
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
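
Each `openssl x509 -checkend 86400` call above succeeds only if the certificate is still valid 86400 seconds (24 hours) from now. A stdlib-only Go sketch of the same check, using two of the paths from the log; it is illustrative, not minikube's certs.go implementation:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// mirroring what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, err)
	}
}
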
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
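
From here the log repeats `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the apiserver process shows up. A minimal sketch of that poll-until-timeout pattern; the interval and timeout are illustrative, not minikube's exact values:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer runs the same pgrep check as the log until it succeeds
// or ctx expires. pgrep exits 0 only when a matching process exists.
func waitForAPIServer(ctx context.Context) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println("apiserver wait:", waitForAPIServer(ctx))
}
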
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
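
provision.go is generating a machine server certificate whose SAN list covers 127.0.0.1, the node IP, localhost, minikube and the machine name. A rough stdlib sketch of issuing a certificate with that kind of SAN list; unlike minikube, it self-signs instead of signing with the ca.pem/ca-key.pem pair, so treat it only as an illustration:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-032884"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-032884"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.19")},
	}
	// Self-signed for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
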
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
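The clock check above is simply "ask the guest for its time over SSH and compare it with the host timestamp taken around the same call"; a drift inside minikube's tolerance (about 69 ms here) is accepted without touching the guest clock. A rough sketch of that comparison, with the SSH key path shortened and the docker user taken from the ssh client lines earlier in this log:

    host_ts=$(date +%s.%N)
    guest_ts=$(ssh -i ~/.minikube/machines/no-preload-032884/id_rsa docker@192.168.72.19 'date +%s.%N')
    # positive delta = guest clock ahead of the host
    awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "guest-host clock delta: %.3fs\n", g - h }'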
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
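Taken together, the systemctl calls above hand the node over to CRI-O by stopping and masking both cri-dockerd and the Docker engine so neither can be socket-activated again. Condensed into one sketch (assuming a systemd guest, as in the Buildroot image used here):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is no longer active"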
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
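The printf and sed fragments above point crictl at the CRI-O socket and rewrite /etc/crio/crio.conf.d/02-crio.conf in place. The state they converge on looks roughly like this (a sketch with TOML section headers omitted, assuming the stock drop-in shipped in the ISO):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]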
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
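The sysctl failure above only means br_netfilter was not loaded yet; loading the module creates /proc/sys/net/bridge/bridge-nf-call-iptables, and IPv4 forwarding is switched on alongside it. The same prerequisite fix, spelled out:

    sudo modprobe br_netfilter                          # makes the bridge-nf-call-* sysctls exist
    sudo sysctl net.bridge.bridge-nf-call-iptables      # the probe that failed above now succeeds
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward     # required for pod-to-pod routing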
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
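This one-liner is an idempotent hosts-file update: drop any existing host.minikube.internal line, append the current gateway IP, and copy the temp file back over /etc/hosts (it writes through a temp file because the shell redirection itself runs unprivileged). The same pattern reappears below for control-plane.minikube.internal. Unrolled:

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.72.1	host.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # a plain "sudo cmd > /etc/hosts" would redirect as the unprivileged user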
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
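Because no preload tarball exists for this Kubernetes/runtime combination, every image was pushed from the local cache and loaded into the storage CRI-O reads. One iteration of that per-image loop, reduced to the commands actually logged above (apiserver image as the example; the source tarball lives under the host's .minikube/cache/images/amd64 tree):

    IMG=registry.k8s.io/kube-apiserver:v1.31.1
    TAR=/var/lib/minikube/images/kube-apiserver_v1.31.1   # copied over SSH when absent on the guest

    sudo podman image inspect --format '{{.Id}}' "$IMG"   # hash check against the expected image ID
    sudo /usr/bin/crictl rmi "$IMG"                        # drop any stale tag before reloading
    stat -c "%s %y" "$TAR"                                 # the copy step is skipped when the tarball already exists
    sudo podman load -i "$TAR"                             # podman and CRI-O share containers/storage, so CRI-O sees it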
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
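The kubeadm.yaml.new written here is the rendered config dumped above; on this restart path it is promoted over the old file and fed to individual kubeadm phases rather than a full init, mirroring the commands that appear further down in this log:

    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo cp "${CFG}.new" "$CFG"
    # regenerate only certs and kubeconfigs; the rest of the control plane is restarted, not re-initialised
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config "$CFG"
    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config "$CFG"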
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
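The hash printed by openssl x509 -hash -noout is what names each /etc/ssl/certs/<hash>.0 symlink; that hashed-directory layout is how OpenSSL locates a CA at verification time. The pattern for a single certificate (minikubeCA as the example):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # b5213941 for this CA, matching the symlink above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # .0 suffix; would be .1 if another CA hashed identically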
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
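Each openssl x509 -checkend 86400 run is a cheap expiry probe: it exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is what lets the restart path reuse these certs instead of regenerating them. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least another day" \
      || echo "expires within 24h"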
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
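In the block above the profile using the v1.20.0 kubectl binary finds no control-plane containers at all, so every "describe nodes" attempt is refused on localhost:8443 and only CRI-O, kubelet, dmesg and container-status logs can be gathered. A quick manual check of the same condition, offered as a diagnostic sketch rather than part of minikube's own flow (it assumes shell access to the node and that ss and curl are installed), could be:

    # Is any apiserver container present at all, running or exited?
    sudo crictl ps -a --name=kube-apiserver
    # Is anything listening on the port the kubectl error refers to?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # The same probe the refused kubectl connection corresponds to:
    curl -ks https://localhost:8443/healthz || echo "connection refused"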
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
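The health wait above resolves once the endpoint returns 200 with body "ok". An equivalent manual probe, assuming the same endpoint address as in the log and using -k to accept the cluster's self-signed serving certificate, is sketched here:

    # Poll the apiserver health endpoint until it answers "ok" (HTTP 200).
    until curl -ks https://192.168.50.193:8443/healthz | grep -q ok; do
      sleep 2
    done
    echo "apiserver is healthy"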
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
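At this point the embed-certs-562750 profile has finished startup: every kube-system pod in the list a few lines above is Running except metrics-server, and kubectl is pointed at the new context. A hedged way to reproduce that check from the host, assuming the kubectl context name matches the cluster name printed above and using the pod name taken from the log:

    # Show the kube-system pods the readiness wait enumerated, via the new context.
    kubectl --context embed-certs-562750 -n kube-system get pods
    # Narrow to the one pod the log still reports as Pending.
    kubectl --context embed-certs-562750 -n kube-system get pod metrics-server-6867b74b74-4j9zb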
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
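The warning above means the 4m readiness wait for metrics-server expired, so instead of reusing the existing control plane this profile wipes it with the kubeadm reset command shown and will bootstrap again. A small sketch for confirming on the node that the reset cleared the old control plane, assuming the default kubeadm paths (not part of minikube's own flow):

    # After kubeadm reset, the static-pod manifest directory should normally be empty.
    sudo ls -la /etc/kubernetes/manifests
    # And the runtime should report no leftover apiserver containers.
    sudo crictl ps -a --name=kube-apiserver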
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
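	The grep/rm sequence above is minikube's stale-kubeconfig cleanup before re-running kubeadm init: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint and is removed otherwise. A minimal shell sketch of that pattern, using the endpoint and file names from the log lines above (the loop itself is illustrative; minikube drives these commands from Go, not from a script):

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    # keep the kubeconfig only if it already points at the expected endpoint
	    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done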
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
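
	The healthz probe logged above can be reproduced by hand against the same endpoint; a minimal sketch (the address and port are specific to this run, and -k skips CA verification for brevity):

	    curl -k https://192.168.39.94:8444/healthz
	    # a healthy apiserver answers HTTP 200 with the body "ok"
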
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
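
	Once a profile reports "Done!", the resulting kubeconfig context can be spot-checked from the host; a minimal sketch, assuming the context is named after the profile as in the message above:

	    kubectl --context default-k8s-diff-port-799783 get nodes
	    kubectl --context default-k8s-diff-port-799783 -n kube-system get pods -o wide
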
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
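
	The grep/rm pairs above implement minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. A condensed bash sketch of the same idea (endpoint and file names taken from the log; not the actual minikube source):

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done
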
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
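
	The two kubectl invocations above grant the kube-system default service account cluster-admin rights (binding minikube-rbac) and stamp the node with minikube's bookkeeping labels. Either result can be inspected afterwards with the same in-VM kubectl binary and kubeconfig; an illustrative sketch:

	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get clusterrolebinding minikube-rbac -o wide
	    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	        get node no-preload-032884 --show-labels
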
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
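
	The metrics-server addon is reported as enabled above, yet the pod listings that follow still show metrics-server Pending with unready containers. Whether the aggregated metrics API ever becomes available can be checked from the host; a sketch, assuming the upstream APIService name (v1beta1.metrics.k8s.io) and the k8s-app=metrics-server label used by the addon manifests:

	    kubectl --context no-preload-032884 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context no-preload-032884 -n kube-system describe pod -l k8s-app=metrics-server
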
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
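
	The kubeadm output above already names the useful next steps on this crio-based node; spelled out as runnable commands (to be executed on the affected guest, e.g. after an ssh into the profile's VM):

	    sudo systemctl status kubelet
	    sudo journalctl -xeu kubelet --no-pager | tail -n 50
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    # then, for a failing container found above:
	    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
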
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.604694527Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871329604673453,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=65bf58e8-b739-481e-8419-e104421b903d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.605297395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2d6c068-7982-4eb3-85f7-c6321ced820e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.605343382Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2d6c068-7982-4eb3-85f7-c6321ced820e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.605436371Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e2d6c068-7982-4eb3-85f7-c6321ced820e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.639637251Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f185400c-5f85-4929-9d59-86abaffd64c1 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.639763704Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f185400c-5f85-4929-9d59-86abaffd64c1 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.640831158Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2f57a73-4f49-45ee-92ce-4a8fa33de95b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.641344019Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871329641257457,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2f57a73-4f49-45ee-92ce-4a8fa33de95b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.641989758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4ac24eb3-db5e-4b55-a73b-a29f55f8211e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.642087984Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4ac24eb3-db5e-4b55-a73b-a29f55f8211e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.642119183Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=4ac24eb3-db5e-4b55-a73b-a29f55f8211e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.676572541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba578493-d53c-478d-8d94-cc149b39a8ad name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.676679151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba578493-d53c-478d-8d94-cc149b39a8ad name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.677908511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6baaf8cf-b0f9-4936-910e-aec0db9a9aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.678254871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871329678228441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6baaf8cf-b0f9-4936-910e-aec0db9a9aa0 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.678871853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d1e608c-7ff0-41ab-af38-bd720a47e515 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.678933606Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d1e608c-7ff0-41ab-af38-bd720a47e515 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.678973133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=5d1e608c-7ff0-41ab-af38-bd720a47e515 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.712869632Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc612f32-df98-46cc-bc40-5bcdd954fcf8 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.712982537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc612f32-df98-46cc-bc40-5bcdd954fcf8 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.714691914Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e6dc173e-b80c-47f6-a981-afbadba39d85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.715077364Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871329715056320,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e6dc173e-b80c-47f6-a981-afbadba39d85 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.715693420Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e6bcb223-de36-4812-9557-648a10ba31f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.715753511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e6bcb223-de36-4812-9557-648a10ba31f7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:28:49 old-k8s-version-593412 crio[635]: time="2024-09-20 22:28:49.715788787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e6bcb223-de36-4812-9557-648a10ba31f7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 22:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053738] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.115225] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.678902] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.608738] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.088274] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.062764] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064033] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.206014] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.138069] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.284835] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.732414] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.070990] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.334399] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.408297] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 22:24] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Sep20 22:26] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.072028] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:28:49 up 8 min,  0 users,  load average: 0.15, 0.14, 0.08
	Linux old-k8s-version-593412 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc00098f560)
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: goroutine 156 [select]:
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00099def0, 0x4f0ac20, 0xc0009a9130, 0x1, 0xc0001000c0)
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002527e0, 0xc0001000c0)
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc00098d160, 0xc000b230c0)
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Sep 20 22:28:46 old-k8s-version-593412 kubelet[5537]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Sep 20 22:28:46 old-k8s-version-593412 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 22:28:46 old-k8s-version-593412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 22:28:47 old-k8s-version-593412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Sep 20 22:28:47 old-k8s-version-593412 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 22:28:47 old-k8s-version-593412 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 22:28:47 old-k8s-version-593412 kubelet[5592]: I0920 22:28:47.597635    5592 server.go:416] Version: v1.20.0
	Sep 20 22:28:47 old-k8s-version-593412 kubelet[5592]: I0920 22:28:47.597981    5592 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 22:28:47 old-k8s-version-593412 kubelet[5592]: I0920 22:28:47.602030    5592 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 22:28:47 old-k8s-version-593412 kubelet[5592]: W0920 22:28:47.607506    5592 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 20 22:28:47 old-k8s-version-593412 kubelet[5592]: I0920 22:28:47.608307    5592 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (234.768133ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-593412" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (705.53s)
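For context on the failure above: the repeated "[kubelet-check]" messages come from kubeadm polling the kubelet's health endpoint, the same check as the quoted 'curl -sSL http://localhost:10248/healthz'. Below is a minimal sketch in Go (illustrative only, not part of the test suite; it assumes it is run on the node itself, and the port and path are taken directly from the log output) of an equivalent probe:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// Mirror the repeated [kubelet-check] attempts seen in the kubeadm output.
		for attempt := 1; attempt <= 3; attempt++ {
			resp, err := client.Get("http://localhost:10248/healthz")
			if err != nil {
				// "connection refused" means the kubelet is not listening on 10248,
				// which is exactly the failure mode reported in the log above.
				fmt.Fprintf(os.Stderr, "attempt %d: kubelet healthz unreachable: %v\n", attempt, err)
				time.Sleep(5 * time.Second)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d %q\n", attempt, resp.StatusCode, string(body))
			return
		}
		os.Exit(1)
	}

A connection-refused result points at the kubelet itself rather than the control-plane containers; the log's own guidance is to inspect 'journalctl -xeu kubelet', and the minikube hint above suggests retrying with --extra-config=kubelet.cgroup-driver=systemd (see the "Cannot detect current cgroup on cgroup v2" kubelet warning in the log).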

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 22:25:09.131892   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-562750 -n embed-certs-562750
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:33:37.965437615 +0000 UTC m=+6358.096572595
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
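A minimal sketch of how the missing dashboard pods could be inspected by hand (illustrative only, not part of the test; assumes minikube created the embed-certs-562750 kubeconfig context):

	# list the pods the test was waiting for
	kubectl --context embed-certs-562750 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	# show events and conditions explaining why they never became Ready
	kubectl --context embed-certs-562750 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard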
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-562750 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-562750 logs -n 25: (2.165968809s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
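The provision step above generates a server certificate whose subject alternative names (san=[...]) cover every name the machine is reached by: loopback, the node IP, the hostname, and the generic localhost/minikube names. The Go sketch below shows the same idea with the standard crypto/x509 package; it self-signs instead of signing with minikube's ca.pem/ca-key.pem, which is the only simplification, and it is an illustration rather than minikube's actual provision code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 2048-bit RSA key for the server certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-562750"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"embed-certs-562750", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.193")},
	}
	// Self-signed here; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
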
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
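fix.go above reads the guest clock with `date +%s.%N`, compares it to the host-side timestamp, and only accepts the machine when the delta is inside a tolerance. A minimal sketch of that comparison, using the values recorded in the log and an assumed 2-second tolerance (the real threshold lives in minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses the output of `date +%s.%N` (e.g. "1726870795.575298210")
// into a time.Time, assuming a 9-digit nanosecond fraction. Hypothetical helper,
// not minikube's actual implementation.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1726870795.575298210") // guest value from the log above
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 9, 20, 22, 19, 55, 501991982, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance %v\n", delta, tolerance)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; would sync the clock\n", delta, tolerance)
	}
}
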
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
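The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, unprivileged-port sysctl) and then restarts the service. The sketch below only gathers the core edits in one place; minikube applies them over SSH through ssh_runner, which is out of scope here, so this version prints the commands as a dry run and is not the project's actual code.

package main

import "fmt"

// crioConfigCommands returns the shell edits that point cri-o at the desired pause
// image and cgroup manager, matching the commands logged above.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	// Dry run: print what would be executed on the guest.
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.10", "cgroupfs") {
		fmt.Println(cmd)
	}
}
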
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
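crio.go decides whether the preload tarball has to be copied and extracted by listing images through crictl and looking for the expected control-plane image (here kube-apiserver). A small sketch of that check follows; the JSON field names (images, repoTags) follow crictl's output format as I understand it and should be treated as an assumption, as should crictl being on PATH and runnable via sudo.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the subset of `crictl images --output json` used here.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the wanted image tag is already present on the node.
func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.1")
	fmt.Println(ok, err)
}
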
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
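The openssl x509 -hash / ln -fs pairs above install each CA certificate into the hashed-directory layout (/etc/ssl/certs/<subject-hash>.0) that OpenSSL-based clients scan. Below is a sketch of one such installation, shelling out to openssl the same way; the paths are illustrative, running it for real needs root, and it is not minikube's actual certs.go code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash of a CA certificate and links it
// into certsDir as <hash>.0, mirroring the ln -fs commands in the log above.
func installCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove any stale link first so the ln -fs semantics are preserved.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
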
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
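The repeated healthz probes above can be reproduced by hand once the apiserver is up. The commands below are an illustrative sketch only, not part of the test harness; they reuse the kubeconfig path, kubectl binary location, and apiserver address that appear elsewhere in this log.

    # Aggregated health check with per-check detail (same [+]/[-] breakdown as above),
    # run on the minikube node:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.1/kubectl get --raw='/healthz?verbose'

    # An unauthenticated request reproduces the 403 for system:anonymous seen at 22:20:07:
    curl -k https://192.168.50.193:8443/healthz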
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
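The 496-byte conflist copied above is not reproduced in the log. For orientation, a bridge CNI configuration of the kind minikube writes for the kvm2/crio combination typically has the shape below; the field values are representative examples, not the exact bytes from this run.

    # Illustrative /etc/cni/net.d/1-k8s.conflist (example values only)
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF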
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
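Every wait above is skipped for the same reason: the node itself has not reported Ready yet, so per-pod readiness is moot. When the same condition needs to be inspected manually, the node's Ready condition and the kube-system pods can be read directly; these commands are illustrative and assume kubectl access to the embed-certs-562750 cluster.

    # Ready condition that gates the pod_ready waits above
    kubectl get node embed-certs-562750 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

    # Pod readiness in kube-system at a glance
    kubectl get pods -n kube-system -o wide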
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
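With storage-provisioner, default-storageclass and metrics-server reported as enabled, the corresponding objects can be checked in-cluster. The commands below are an illustrative follow-up rather than part of the test run; the object names are taken from the pod list earlier in this log.

    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass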
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
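	[editor's note] The kubeadm.go:163 lines above show the stale-config cleanup step: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane URL and removed when the URL is absent, before the freshly rendered kubeadm.yaml is copied into place. A minimal Go sketch of that loop is below; cleanupStaleKubeconfigs and its use of local exec are illustrative only, not minikube's actual helper or SSH runner.

package main

import (
	"fmt"
	"os/exec"
)

// cleanupStaleKubeconfigs removes any config file that does not mention the
// expected API-server URL, mirroring the grep/rm pairs in the log above.
func cleanupStaleKubeconfigs(apiServerURL string, files []string) {
	for _, f := range files {
		// grep exits non-zero when the pattern is absent or the file is missing.
		if err := exec.Command("sudo", "grep", apiServerURL, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", apiServerURL, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8444", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}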
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
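	[editor's note] The api_server.go lines above poll https://192.168.39.94:8444/healthz roughly every 500ms, treating connection refused, 403 (anonymous user) and 500 (post-start hooks still running) as "not yet healthy" until a 200 "ok" arrives. A self-contained Go sketch of that wait loop follows; it skips TLS verification purely for brevity, whereas the real client would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
// or the timeout elapses, mirroring the retry loop in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: do not skip certificate verification in production.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // control plane is up
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.94:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}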
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
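	[editor's note] The cni.go lines above show that for the kvm2 driver with the crio runtime, minikube falls back to the simple bridge CNI and copies a small conflist to /etc/cni/net.d/1-k8s.conflist. The Go snippet below writes a typical bridge conflist of that shape; the exact fields and the 10.244.0.0/16 subnet are assumptions for illustration, not the bytes minikube actually rendered here.

package main

import "os"

// bridgeConflist is a representative bridge + portmap chain; minikube renders
// its own template, so treat this content as an example only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Written to the working directory for the sketch; the log copies it over SSH.
	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}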
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
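	[editor's note] The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, skipping (and logging an error for) pods whose node is itself not Ready. A small Go sketch of the same check is below; it shells out to kubectl with a jsonpath query against an assumed working kubeconfig, whereas minikube itself uses client-go.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the pod's Ready condition is "True", the same
// condition the pod_ready lines above poll.
func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ready, err := podReady("kube-system", "kube-scheduler-default-k8s-diff-port-799783")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}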
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
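	[editor's note] The retry.go:31 lines above are a retry-with-growing-backoff loop: while the VM has no DHCP lease yet, each failed lookup schedules another attempt after a slightly longer, jittered delay. A generic Go sketch of that pattern follows; the exact backoff policy (linear growth plus jitter) and attempt count are assumptions, not minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or the attempts run out, sleeping a
// growing, randomized interval between tries.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("unable to find current IP address of domain")
		}
		return nil
	})
	fmt.Println("result:", err)
}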
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
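	[editor's note] The provision.go:117 line above generates a server certificate whose SAN list covers 127.0.0.1, the VM IP, localhost, minikube and the profile hostname. The Go sketch below produces a certificate with that SAN set using crypto/x509; for brevity it is self-signed, whereas the real flow signs with the ca.pem/ca-key.pem pair, and the key size and validity period are assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-593412"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-593412"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.2")},
	}
	// Self-signed for the sketch: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
}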
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
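	(For reference, the 78.484ms delta is just the difference between the two timestamps logged above: 1726870835.076325315 (guest) - 1726870834.997841315 (remote) = 0.078484 s, i.e. 78.484 ms, which falls inside the skew tolerance, so the guest clock is left untouched.)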
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
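	At this point cri-docker and docker have been stopped, disabled and masked, leaving CRI-O as the only runtime that can own the CRI socket. A quick manual check on the guest (assuming the same systemd units) would be:

	  systemctl is-enabled docker.service cri-docker.service   # expected: masked
	  systemctl is-active docker.service                        # expected: inactive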
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
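	Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager and re-add conmon_cgroup = "pod", so the drop-in should now contain (in some order):

	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.2"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"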
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
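	The earlier sysctl probe failed only because br_netfilter was not loaded yet, so the module is loaded and IPv4 forwarding switched on before restarting CRI-O. The same state can be confirmed by hand:

	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded
	  cat /proc/sys/net/ipv4/ip_forward           # expected: 1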
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
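	The one-liner rewrites /etc/hosts through a temp file: drop any stale host.minikube.internal entry, append the gateway address, then copy the result back. Afterwards the guest should resolve the name to the address used above:

	  grep host.minikube.internal /etc/hosts
	  # 192.168.61.1	host.minikube.internal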
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
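	This drop-in is what the 428-byte scp further down writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line is the usual systemd idiom for clearing the packaged ExecStart before substituting minikube's own kubelet command line. Once installed it can be inspected with:

	  systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in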
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
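	Each CA follows the same pattern: place the .pem under /usr/share/ca-certificates, link it into /etc/ssl/certs, then symlink <openssl-hash>.0 to it so OpenSSL's hashed directory lookup can find it. For the minikube CA the steps above reduce to:

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0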
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
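	The -checkend 86400 flag asks openssl whether the certificate expires within the next 86400 seconds (24 hours); the command exits 0 only if the cert remains valid past that window, so these six checks are a cheap validity screen before the restart proceeds. Standalone example with one of the same files:

	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least another 24h" || echo "expires within 24h"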
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
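	With the stale kubeconfig files cleaned up, the restart path rebuilds the control plane with individual kubeadm init phase subcommands rather than a full kubeadm init: certs, kubeconfig, kubelet-start, control-plane, then etcd, each against the generated /var/tmp/minikube/kubeadm.yaml. A condensed, equivalent sketch of the five commands above:

	  for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
	      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	  done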
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
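
	The repeated pod_ready lines are polls of the Ready condition on the metrics-server pod, which stays False for the whole run. A stripped-down client-go sketch of that check follows; the kubeconfig path and the k8s-app=metrics-server label selector are assumptions, it needs the client-go modules in go.mod, and minikube's own helper wraps this in retries and timeouts.

	// Sketch only: report whether metrics-server pods have the PodReady condition True.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"}) // assumed selector
		if err != nil {
			log.Fatal(err)
		}
		for _, pod := range pods.Items {
			ready := false
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q in %q has status Ready: %v\n", pod.Name, pod.Namespace, ready)
		}
	}
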
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
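
	The "generating server cert" line above produces a CA-signed server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the machine name. The following standard-library sketch shows that shape under stated assumptions: it generates a throwaway stand-in CA instead of loading ca.pem/ca-key.pem from .minikube/certs, and it is not minikube's provisioner code.

	// Sketch only: CA-signed server cert with the org and SAN list from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		// Stand-in CA (assumption); the real step loads an existing CA key pair.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		check(err)
		caCert, err := x509.ParseCertificate(caDER)
		check(err)

		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-032884"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.19")},
			DNSNames:     []string{"localhost", "minikube", "no-preload-032884"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		check(err)
		check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
	}
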
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
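
	The sed invocations above rewrite the CRI-O drop-in in place: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup manager forced to "cgroupfs". A small Go equivalent of the two central edits, assuming direct local file access instead of minikube's sed-over-SSH:

	// Sketch only: rewrite pause_image and cgroup_manager in the CRI-O drop-in.
	package main

	import (
		"log"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log

		data, err := os.ReadFile(conf)
		if err != nil {
			log.Fatal(err)
		}
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

		if err := os.WriteFile(conf, data, 0o644); err != nil {
			log.Fatal(err)
		}
	}
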
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
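
	Each "needs transfer" line above comes from comparing the image ID reported by `sudo podman image inspect --format {{.Id}}` with the ID expected for the cached image; a mismatch (or a missing image) means the cached tarball has to be loaded. A hedged sketch of that comparison, with an illustrative expected-ID map standing in for minikube's image cache metadata:

	// Sketch only: decide whether an image must be transferred into the runtime.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageExistsAt reports whether the runtime already holds `image` with the given ID,
	// using the same podman query seen in the log.
	func imageExistsAt(image, wantID string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err != nil {
			return false // not present (or podman failed): treat as "needs transfer"
		}
		return strings.TrimSpace(string(out)) == wantID
	}

	func main() {
		expected := map[string]string{ // illustrative IDs only
			"registry.k8s.io/etcd:3.5.15-0": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
		}
		for img, id := range expected {
			if !imageExistsAt(img, id) {
				fmt.Printf("%q needs transfer\n", img)
			}
		}
	}
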
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
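
For context, the cache_images/crio steps above amount to copying each cached image tarball to the guest and loading it into the CRI-O store by shelling out to podman ("sudo podman load -i <tarball>"). A minimal Go sketch of that shell-out, illustrative only and not minikube's actual helper; the tarball path and passwordless sudo are assumptions for the example:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadImage replays what the log shows: "sudo podman load -i <tarball>".
    // It returns the combined output so a caller can log it on failure.
    func loadImage(tarball string) (string, error) {
    	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
    	out, err := cmd.CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Illustrative path, matching the layout seen in the log above.
    	out, err := loadImage("/var/lib/minikube/images/kube-apiserver_v1.31.1")
    	if err != nil {
    		fmt.Printf("podman load failed: %v\n%s\n", err, out)
    		return
    	}
    	fmt.Println(out)
    }
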
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
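
The six openssl invocations above all use "-checkend 86400", i.e. "does this certificate expire within the next 24 hours?"; openssl exits non-zero when it will. A small Go sketch of the same check, with an illustrative certificate path taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithin reports whether the certificate at path expires within the
    // given number of seconds, using the same openssl flag seen in the log.
    func expiresWithin(path string, seconds int) (bool, error) {
    	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
    		"-checkend", fmt.Sprint(seconds))
    	err := cmd.Run()
    	if err == nil {
    		return false, nil // exit 0: still valid past the window
    	}
    	if _, ok := err.(*exec.ExitError); ok {
    		return true, nil // exit 1: expires within the window
    	}
    	return false, err // openssl itself failed to run
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400)
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	fmt.Println("expires within 24h:", expiring)
    }
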
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
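
The restart path above re-runs the individual "kubeadm init" phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly copied /var/tmp/minikube/kubeadm.yaml rather than doing a full "kubeadm init". A hedged Go shell-out sketch of that sequence, using the binaries directory and config path visible in the log; this is an illustration, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runPhase mirrors the log's /bin/bash -c "sudo env PATH=... kubeadm init phase <phase> --config ...".
    func runPhase(phase string) error {
    	script := fmt.Sprintf(
    		`sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
    		phase)
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
    	}
    	return nil
    }

    func main() {
    	// Same order as the log above.
    	for _, phase := range []string{
    		"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
    	} {
    		if err := runPhase(phase); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    }
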
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
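
The wait above polls https://192.168.72.19:8443/healthz about every 500ms, tolerating connection-refused, 403 (the check runs as an anonymous user) and 500 (post-start hooks still running) until the endpoint returns a plain "ok". A minimal sketch of such a polling loop; skipping TLS verification here is an assumption made purely for illustration:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200 "ok"
    // or the deadline passes. Connection errors, 403s and 500s are retried,
    // matching the behaviour visible in the log above.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.72.19:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
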
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
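
The pod_ready.go waits above keep re-reading each system pod and checking its Ready condition (skipping early, as seen earlier, when the hosting node itself is not Ready). A condensed client-go sketch of the per-pod check, assuming a reachable kubeconfig at an illustrative path; this is not minikube's helper:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady returns true when the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Illustrative kubeconfig path; minikube manages its own kubeconfig entries.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll the scheduler pod, mirroring the 4m0s wait in the log.
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := client.CoreV1().Pods("kube-system").
    			Get(context.TODO(), "kube-scheduler-no-preload-032884", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for pod to become Ready")
    }
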
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
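Editor's note: the block above is one pass of minikube's log gatherer. With the apiserver down, every `crictl ps -a --quiet --name=<component>` probe returns nothing ("No container was found matching ..."), `kubectl describe nodes` is refused on localhost:8443, and the gatherer falls back to dumping dmesg, the CRI-O and kubelet journals, and raw container status. A minimal sketch of that probe loop follows, assuming only that crictl is installed on the node; this is an illustration of the pattern seen in the log, not minikube's actual cri.go/logs.go code.

// probe.go - minimal sketch (assumption: not minikube source) of the
// per-component container probe logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints; -a includes exited containers, --quiet prints
// only IDs. An empty slice means nothing matched.
func containerIDs(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %s\n", c, strings.Join(ids, ", "))
	}
}

An empty ID list for every component is exactly what produces the repeated warnings in this log, and it is consistent with the connection-refused error from describe nodes: nothing is serving on localhost:8443 because the kube-apiserver container never started.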
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
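Editor's note: the lines from PIDs 64543, 64076 and 64930 interleaved here appear to come from other test runs executing in parallel; each is polling a metrics-server pod in kube-system, and pod_ready.go keeps reporting "Ready":"False" because the pod never gains the Ready condition. A minimal sketch of that readiness check with client-go follows, assuming a reachable cluster; the kubeconfig path, namespace and pod name are taken from this log for illustration only, and this is not minikube's pod_ready.go.

// podready.go - minimal sketch (assumption: not minikube source) of checking
// a pod's Ready condition the way the poll above reports it.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the PodReady condition is "True".
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-6867b74b74-4j9zb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
}

A wait helper would loop on a check like this until the condition turns true or the test's wait timeout expires, which is why the same "Ready":"False" line recurs at roughly two-second intervals throughout this section.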
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
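The pass above repeats, essentially unchanged, for the rest of this run: the "pgrep -xnf kube-apiserver.*minikube.*" probe and the per-component "crictl ps" listings keep coming back empty, so the kubectl "describe nodes" step can only fail with the connection-refused error on localhost:8443 - no apiserver container was ever created, so nothing is listening there. A minimal shell sketch of that diagnostic pass, runnable on the node (for example via "minikube ssh") and using only the commands already shown in the log (the loop wrapper is added here for illustration), would be:

    # probe for a running apiserver process, then list each expected control-plane container
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      sudo crictl ps -a --quiet --name="$name"    # empty output = no such container
    done
    # gather kubelet, dmesg, node, CRI-O and container-status logs, as in the log above
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a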
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
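The pod_ready lines interleaved through this output come from three separate test processes (PIDs 64543, 64076 and 64930), each polling its own metrics-server pod that never reports Ready. A quick way to inspect the underlying pod condition from the host would be the following sketch, assuming the upstream k8s-app=metrics-server label and the kubeconfig context of the cluster in question (both are assumptions, not taken from the log):

    # list the metrics-server pod and inspect why its Ready condition stays False
    kubectl -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl -n kube-system describe pod -l k8s-app=metrics-server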
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
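
The failure above is the kubelet never answering its health check while kubeadm waits for the control plane to come up. Before rerunning the test, the same checks that kubeadm and minikube recommend in this log can be performed by hand on the node. The commands below are only a sketch assembled from the suggestions quoted above; the minikube profile name and any flags beyond the one suggested are not shown in this log and would depend on the test being reproduced.

	# Is the kubelet running, and what does it log?
	systemctl status kubelet
	journalctl -xeu kubelet

	# The health endpoint kubeadm polls; "connection refused" means the kubelet is not up.
	curl -sSL http://localhost:10248/healthz

	# List control-plane containers started by CRI-O, then inspect a failing one (CONTAINERID as quoted above).
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Suggestion from the log: align the kubelet cgroup driver with systemd when starting minikube.
	minikube start --extra-config=kubelet.cgroup-driver=systemd
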
	
	
	==> CRI-O <==
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.546518012Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c70dfe1b-e1f7-482f-984e-62eb4ed6b5cc name=/runtime.v1.RuntimeService/Version
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.547616508Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d33b44de-f584-495a-9a47-5499409f4636 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.548094451Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871619548071275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d33b44de-f584-495a-9a47-5499409f4636 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.548625146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e9763dc-256f-4976-acee-e897d0c4b8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.548678886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e9763dc-256f-4976-acee-e897d0c4b8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.549689896Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e9763dc-256f-4976-acee-e897d0c4b8dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.586217098Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=7ecd2384-ce43-438c-992c-44632f8f8f44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.586478505Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&PodSandboxMetadata{Name:busybox,Uid:246386c0-503a-4f46-8c53-5ce47131b001,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870823358481713,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:20:07.463603993Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-46wxm,Uid:9bfc28b7-4bdf-4f37-82fb-59316bec0274,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870823354461
071,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:20:07.463596993Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fb38415a8ae33b1cf177779c558d160d8eeb8830b782dc57b6b508bebf52fb4c,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-4j9zb,Uid:cce79400-7958-4b2f-bcb0-2e59046134cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870814537403341,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-4j9zb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cce79400-7958-4b2f-bcb0-2e59046134cd,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:20:07.
463542830Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:12d59a28-0756-4027-bd21-97fdc86a8137,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870807779546480,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-
minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T22:20:07.463544211Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&PodSandboxMetadata{Name:kube-proxy-wsrnw,Uid:173d3a74-5546-4d40-8111-0607c3cbdbba,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870807777855754,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8111-0607c3cbdbba,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.i
o/config.seen: 2024-09-20T22:20:07.463537150Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-562750,Uid:74b94e24003ba343ba26d810925fbbdd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870803967841643,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74b94e24003ba343ba26d810925fbbdd,kubernetes.io/config.seen: 2024-09-20T22:20:03.462786951Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-562750,Uid:b8084b96192d2fb41fda359
d2bedb8dd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870803964346037,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.193:2379,kubernetes.io/config.hash: b8084b96192d2fb41fda359d2bedb8dd,kubernetes.io/config.seen: 2024-09-20T22:20:03.526353209Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-562750,Uid:26c668b42913ed1686fab257328e725b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870803950667359,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-562
750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.193:8443,kubernetes.io/config.hash: 26c668b42913ed1686fab257328e725b,kubernetes.io/config.seen: 2024-09-20T22:20:03.462782516Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-562750,Uid:1fe55b2c88ce29a7a29e5f88b95b0f49,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726870803948389060,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1fe55b2c88ce29a7a29e5f88b9
5b0f49,kubernetes.io/config.seen: 2024-09-20T22:20:03.462788040Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=7ecd2384-ce43-438c-992c-44632f8f8f44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.587303878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e5acdc53-71cc-48d4-8035-fa83a5e834cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.587379986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e5acdc53-71cc-48d4-8035-fa83a5e834cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.587575505Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e5acdc53-71cc-48d4-8035-fa83a5e834cf name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.594012672Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1adec40-09f2-4af1-9825-a9d40d9a8d61 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.594095519Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1adec40-09f2-4af1-9825-a9d40d9a8d61 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.595514760Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b7d862ad-472a-40e8-b7e1-9e27002f35c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.596000308Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871619595978189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b7d862ad-472a-40e8-b7e1-9e27002f35c6 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.596443679Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d4921025-49d7-49d5-944b-88e844d6664b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.596511520Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d4921025-49d7-49d5-944b-88e844d6664b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.596798638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d4921025-49d7-49d5-944b-88e844d6664b name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.632397258Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=580c8d6f-fec2-4d69-b5f1-a315a3fc4239 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.632490218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=580c8d6f-fec2-4d69-b5f1-a315a3fc4239 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.634018624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a06ce4f-1193-4396-a6f0-296d6d75e20c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.634401684Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871619634379952,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a06ce4f-1193-4396-a6f0-296d6d75e20c name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.635024002Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7425e3a-63cd-4cba-abbc-33b4f7140cfa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.635089981Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7425e3a-63cd-4cba-abbc-33b4f7140cfa name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:33:39 embed-certs-562750 crio[722]: time="2024-09-20 22:33:39.635271880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7425e3a-63cd-4cba-abbc-33b4f7140cfa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cb8dacd4796a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   7dc9d2321abd1       storage-provisioner
	8ae255cbc7eac       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   a9f41ee2c3cfd       busybox
	c6e4193ea7db3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   50e558f13ca12       coredns-7c65d6cfc9-46wxm
	e7fe3b2aa3fbf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   30749e574f00b       kube-proxy-wsrnw
	a70c22cc89dec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   7dc9d2321abd1       storage-provisioner
	de480c158751f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   aaf3dad6ece4c       etcd-embed-certs-562750
	80549c2470100       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   74ce2ef4cd63c       kube-scheduler-embed-certs-562750
	15ee26017cb99       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   9c4d734f964fe       kube-apiserver-embed-certs-562750
	065928d7289f2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   2f465f090e7f7       kube-controller-manager-embed-certs-562750
	
	
	==> coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43434 - 44150 "HINFO IN 3449325025002424531.9216361121714409084. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009921692s
	
	
	==> describe nodes <==
	Name:               embed-certs-562750
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-562750
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=embed-certs-562750
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_12_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:12:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-562750
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:33:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:30:50 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:30:50 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:30:50 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:30:50 +0000   Fri, 20 Sep 2024 22:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.193
	  Hostname:    embed-certs-562750
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92775a493d91427b88236f272b18a57d
	  System UUID:                92775a49-3d91-427b-8823-6f272b18a57d
	  Boot ID:                    81af2517-1757-4a57-870d-ac70eddc1f48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-46wxm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-562750                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-562750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-562750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-wsrnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-562750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-4j9zb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m                kubelet          Node embed-certs-562750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node embed-certs-562750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-562750 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node embed-certs-562750 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-562750 event: Registered Node embed-certs-562750 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-562750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-562750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-562750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-562750 event: Registered Node embed-certs-562750 in Controller
	
	
	==> dmesg <==
	[Sep20 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051007] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040318] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556072] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.664362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.829686] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.060505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056819] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.182364] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.146152] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.329664] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[Sep20 22:20] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.066860] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.126504] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +4.591425] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.975068] systemd-fstab-generator[1556]: Ignoring "noauto" option for root device
	[  +3.760248] kauditd_printk_skb: 64 callbacks suppressed
	[  +8.824850] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.427443] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] <==
	{"level":"info","ts":"2024-09-20T22:20:06.099202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae7a0b331b725d08 received MsgVoteResp from ae7a0b331b725d08 at term 3"}
	{"level":"info","ts":"2024-09-20T22:20:06.099210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ae7a0b331b725d08 became leader at term 3"}
	{"level":"info","ts":"2024-09-20T22:20:06.099219Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ae7a0b331b725d08 elected leader ae7a0b331b725d08 at term 3"}
	{"level":"info","ts":"2024-09-20T22:20:06.101260Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ae7a0b331b725d08","local-member-attributes":"{Name:embed-certs-562750 ClientURLs:[https://192.168.50.193:2379]}","request-path":"/0/members/ae7a0b331b725d08/attributes","cluster-id":"638800dd95597f85","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:20:06.101310Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:20:06.101397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:20:06.101735Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:20:06.101779Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:20:06.102521Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:20:06.102521Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:20:06.103384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.193:2379"}
	{"level":"info","ts":"2024-09-20T22:20:06.103677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T22:20:22.468240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.679214ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6703768649290275979 > lease_revoke:<id:5d0892117e1f446b>","response":"size:28"}
	{"level":"warn","ts":"2024-09-20T22:20:22.720807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.134463ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6703768649290275980 > lease_revoke:<id:5d0892117e1f4414>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T22:20:22.721035Z","caller":"traceutil/trace.go:171","msg":"trace[38428455] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"236.850495ms","start":"2024-09-20T22:20:22.484160Z","end":"2024-09-20T22:20:22.721010Z","steps":["trace[38428455] 'read index received'  (duration: 40.114µs)","trace[38428455] 'applied index is now lower than readState.Index'  (duration: 236.808259ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T22:20:22.721089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.77103ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T22:20:22.721150Z","caller":"traceutil/trace.go:171","msg":"trace[2032221090] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"210.84654ms","start":"2024-09-20T22:20:22.510294Z","end":"2024-09-20T22:20:22.721140Z","steps":["trace[2032221090] 'range keys from in-memory index tree'  (duration: 210.762891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:20:22.721291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.122429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-46wxm\" ","response":"range_response_count:1 size:4949"}
	{"level":"info","ts":"2024-09-20T22:20:22.721358Z","caller":"traceutil/trace.go:171","msg":"trace[1691729838] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-46wxm; range_end:; response_count:1; response_revision:607; }","duration":"237.211531ms","start":"2024-09-20T22:20:22.484137Z","end":"2024-09-20T22:20:22.721349Z","steps":["trace[1691729838] 'agreement among raft nodes before linearized reading'  (duration: 236.966969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:23.479208Z","caller":"traceutil/trace.go:171","msg":"trace[791540973] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"158.739675ms","start":"2024-09-20T22:20:23.320448Z","end":"2024-09-20T22:20:23.479187Z","steps":["trace[791540973] 'process raft request'  (duration: 158.584995ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:42.662373Z","caller":"traceutil/trace.go:171","msg":"trace[1807087431] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"124.678178ms","start":"2024-09-20T22:20:42.537669Z","end":"2024-09-20T22:20:42.662348Z","steps":["trace[1807087431] 'process raft request'  (duration: 124.464415ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:42.688737Z","caller":"traceutil/trace.go:171","msg":"trace[368319078] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"148.197568ms","start":"2024-09-20T22:20:42.540522Z","end":"2024-09-20T22:20:42.688719Z","steps":["trace[368319078] 'process raft request'  (duration: 147.670156ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:30:06.132668Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2024-09-20T22:30:06.142408Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":870,"took":"9.415723ms","hash":3562258311,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2826240,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-20T22:30:06.142466Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3562258311,"revision":870,"compact-revision":-1}
	
	
	==> kernel <==
	 22:33:39 up 14 min,  0 users,  load average: 0.00, 0.07, 0.07
	Linux embed-certs-562750 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] <==
	E0920 22:30:08.424179       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 22:30:08.424219       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 22:30:08.425399       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:30:08.425432       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:31:08.425745       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:31:08.425860       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 22:31:08.425942       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:31:08.425958       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 22:31:08.427091       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:31:08.427136       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:33:08.427952       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:33:08.428048       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:33:08.427988       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:33:08.428140       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:33:08.429442       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:33:08.429476       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] <==
	E0920 22:28:11.057259       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:28:11.532592       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:28:41.064668       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:28:41.542002       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:29:11.071474       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:29:11.551683       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:29:41.077365       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:29:41.559355       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:30:11.084734       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:11.566603       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:30:41.092556       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:41.573996       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:30:50.844118       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-562750"
	E0920 22:31:11.099186       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:11.582841       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:31:23.548343       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="336.539µs"
	I0920 22:31:37.549255       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="138.706µs"
	E0920 22:31:41.104809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:41.590740       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:32:11.110797       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:11.598509       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:32:41.117630       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:41.606132       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:33:11.124849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:33:11.614437       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:20:08.147523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:20:08.164533       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.193"]
	E0920 22:20:08.164952       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:20:08.206011       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:20:08.206087       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:20:08.206120       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:20:08.209209       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:20:08.209746       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:20:08.209781       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:20:08.212292       1 config.go:199] "Starting service config controller"
	I0920 22:20:08.212351       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:20:08.212396       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:20:08.212423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:20:08.215693       1 config.go:328] "Starting node config controller"
	I0920 22:20:08.215758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:20:08.312821       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:20:08.312719       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:20:08.316568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] <==
	I0920 22:20:05.442204       1 serving.go:386] Generated self-signed cert in-memory
	W0920 22:20:07.346260       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 22:20:07.346352       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 22:20:07.346380       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 22:20:07.346404       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 22:20:07.406543       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 22:20:07.408957       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:20:07.413726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 22:20:07.415771       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 22:20:07.420782       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 22:20:07.415810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 22:20:07.524092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:32:29 embed-certs-562750 kubelet[932]: E0920 22:32:29.535090     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:32:33 embed-certs-562750 kubelet[932]: E0920 22:32:33.736140     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871553735575419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:33 embed-certs-562750 kubelet[932]: E0920 22:32:33.736466     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871553735575419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:43 embed-certs-562750 kubelet[932]: E0920 22:32:43.738642     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871563738249321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:43 embed-certs-562750 kubelet[932]: E0920 22:32:43.738689     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871563738249321,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:44 embed-certs-562750 kubelet[932]: E0920 22:32:44.532929     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:32:53 embed-certs-562750 kubelet[932]: E0920 22:32:53.740708     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871573740335604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:53 embed-certs-562750 kubelet[932]: E0920 22:32:53.741093     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871573740335604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:32:57 embed-certs-562750 kubelet[932]: E0920 22:32:57.533205     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]: E0920 22:33:03.554654     932 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]: E0920 22:33:03.743447     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871583743016441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:03 embed-certs-562750 kubelet[932]: E0920 22:33:03.743479     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871583743016441,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:10 embed-certs-562750 kubelet[932]: E0920 22:33:10.532703     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:33:13 embed-certs-562750 kubelet[932]: E0920 22:33:13.747381     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871593746728813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:13 embed-certs-562750 kubelet[932]: E0920 22:33:13.747410     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871593746728813,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:22 embed-certs-562750 kubelet[932]: E0920 22:33:22.532513     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:33:23 embed-certs-562750 kubelet[932]: E0920 22:33:23.749635     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871603749083111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:23 embed-certs-562750 kubelet[932]: E0920 22:33:23.750022     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871603749083111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:33 embed-certs-562750 kubelet[932]: E0920 22:33:33.752300     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871613751569344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:33 embed-certs-562750 kubelet[932]: E0920 22:33:33.752353     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871613751569344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:34 embed-certs-562750 kubelet[932]: E0920 22:33:34.533765     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	
	
	==> storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] <==
	I0920 22:20:38.870805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:20:38.887714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:20:38.887822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:20:56.299861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:20:56.300108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6!
	I0920 22:20:56.301124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2510ee1-0d87-4e10-896d-335b7aa732be", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6 became leader
	I0920 22:20:56.401157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6!
	
	
	==> storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] <==
	I0920 22:20:07.991211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 22:20:37.994647       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-562750 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4j9zb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb: exit status 1 (62.905505ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4j9zb" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.29s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 22:25:34.880198   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:34:31.547159119 +0000 UTC m=+6411.678294102
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799783 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-799783 logs -n 25: (2.225928511s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
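(Editor's note: the two commands above are the fallback the log records when the bridge-netfilter sysctl is missing: load br_netfilter, then enable IPv4 forwarding. Below is a minimal, illustrative Go sketch of that check-then-fallback pattern; it is not minikube's actual code. The command strings are taken from the log lines above, and error handling is simplified.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and returns its combined output.
	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		// Check whether the bridge-netfilter sysctl is already available.
		if _, err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
			// The sysctl only exists once the br_netfilter module is loaded.
			if _, err := run("sudo", "modprobe", "br_netfilter"); err != nil {
				fmt.Println("could not load br_netfilter:", err)
			}
		}
		// Enable IPv4 forwarding so pod traffic can be routed.
		if out, err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
			fmt.Println("enabling ip_forward failed:", err, out)
		}
	}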
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
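(Editor's note: the bash one-liner above rewrites /etc/hosts so that host.minikube.internal resolves to the host gateway address. The same idempotent update is sketched in Go below purely for illustration; the file path and entry are taken from the log, and this is not minikube's implementation, which runs the shell command over SSH.)

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.39.1\thost.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		// Drop any stale host.minikube.internal line, then append the fresh one.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			fmt.Println("write:", err)
		}
	}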
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
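(Editor's note: the openssl invocations above use "-checkend 86400", which asks whether each certificate will still be valid 86400 seconds, i.e. 24 hours, from now; that is how the restart path decides whether control-plane certs need regeneration. Below is an equivalent, illustrative check in Go; the file path is one of the certs from the log, and this is not minikube's implementation.)

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given duration (the openssl -checkend equivalent).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}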
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
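
The systemctl calls above hand the CRI socket over to CRI-O by stopping and masking both cri-dockerd and the docker engine. A minimal sketch of the same idea, assuming a systemd host with the unit names seen in the log (this is illustrative, not minikube's actual code):

    # Stop and mask cri-dockerd so it cannot claim the CRI socket
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # Then do the same for the docker engine itself
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    # A zero exit here would mean docker is somehow still active
    systemctl is-active --quiet docker && echo "docker still active" >&2
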
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
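
Taken together, the commands above point crictl at the CRI-O socket and rewrite the CRI-O drop-in for the desired pause image and cgroup driver. A condensed sketch of those edits, using the same file paths as the log (CONF is just a shorthand variable introduced here):

    # Point crictl at the CRI-O socket
    printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml >/dev/null

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and switch the cgroup manager to cgroupfs
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    # Re-create conmon_cgroup directly after cgroup_manager
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
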
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
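
The sysctl probe above fails (status 255) because the bridge-nf sysctls do not exist until the br_netfilter module is loaded, which is why the log falls back to modprobe before enabling IP forwarding. A small sketch of that order of operations, using the module and sysctl names from the log:

    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
        sudo modprobe br_netfilter          # the sysctl only appears once the module is loaded
    fi
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now report 1
    # kube-proxy and the bridge CNI also need IPv4 forwarding
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
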
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
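
The one-liner above is an idempotent hosts-file update: strip any existing host.minikube.internal entry, append the current gateway address, and copy the result back over /etc/hosts. Spelled out as a sketch, with HOST_IP standing in for the 192.168.61.1 value from the log:

    HOST_IP=192.168.61.1    # the libvirt gateway as seen from the guest
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '%s\thost.minikube.internal\n' "$HOST_IP"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
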
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
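
Because /preloaded.tar.lz4 is missing on the guest, the preloaded image tarball is copied over SSH and then unpacked into /var with lz4 decompression, preserving extended attributes so file capabilities survive. A sketch of the guest-side steps, assuming the tarball has already been copied to /preloaded.tar.lz4 as in the log:

    # Only extract if the preload tarball is actually present
    if stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
        sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        sudo rm -f /preloaded.tar.lz4
        # the preloaded images should now be visible to the runtime
        sudo crictl images --output json >/dev/null
    fi
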
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
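
The retry.go lines above poll the libvirt network for a DHCP lease with steadily growing delays until the new domain reports an IP. A rough shell equivalent using virsh (the network name and MAC are taken from the log; virsh availability, the 20-attempt cap, and the 1.5x backoff are assumptions for illustration only):

    NET=mk-no-preload-032884
    MAC=52:54:00:a6:6f:3c
    delay=0.3
    for attempt in $(seq 1 20); do
        ip=$(sudo virsh net-dhcp-leases "$NET" | awk -v mac="$MAC" '$3 == mac {print $5}')
        if [ -n "$ip" ]; then
            echo "machine came up with lease $ip"
            break
        fi
        echo "attempt $attempt: no lease yet, retrying after ${delay}s"
        sleep "$delay"
        delay=$(echo "$delay * 1.5" | bc)   # grow the delay, mirroring the log's backoff (needs bc)
    done
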
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
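
The cache_images flow above checks each required image in the container runtime, removes stale copies with crictl rmi, and then tries to load a replacement from the local image cache; here every cache file is missing, so the warning is printed and the images will have to be pulled instead. A per-image sketch of that check, where IMG and CACHE are placeholders and podman load merely stands in for minikube's own image-loading path:

    IMG=registry.k8s.io/kube-proxy:v1.20.0
    CACHE=$HOME/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
    # Is the image already present in the runtime's store?
    if ! sudo podman image inspect --format '{{.Id}}' "$IMG" >/dev/null 2>&1; then
        sudo /usr/bin/crictl rmi "$IMG" 2>/dev/null || true   # drop any stale/partial copy
        if [ -f "$CACHE" ]; then
            sudo podman load -i "$CACHE"                      # reload from the local cache (assumed archive format)
        else
            echo "no cached copy of $IMG; it will have to be pulled" >&2
        fi
    fi
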
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
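
Restarting the node's kubelet comes down to dropping the regenerated unit, drop-in, and kubeadm.yaml into place and reloading systemd, which is what the scp and systemctl lines above do. A sketch with the same destination paths; the local source file names are placeholders for the content minikube copies from memory:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    sudo cp 10-kubeadm.conf  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # ExecStart flags shown above
    sudo cp kubelet.service  /lib/systemd/system/kubelet.service
    sudo cp kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml.new
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
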
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
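
The openssl x509 -hash calls above exist to name the /etc/ssl/certs symlinks: OpenSSL looks CAs up by subject-name hash, so each certificate copied into /usr/share/ca-certificates gets a matching <hash>.0 link. A sketch of how one of those links is produced, using minikubeCA.pem (which hashes to b5213941 in the log) as the example:

    PEM=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$PEM")      # e.g. b5213941
    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"     # OpenSSL resolves the CA through this name
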
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
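
The -checkend 86400 probes above simply ask whether each control-plane certificate remains valid for at least another day; a non-zero exit would trigger regeneration. Looping over the same certificates checked in the log, for example:

    for crt in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
        if ! openssl x509 -noout -in "/var/lib/minikube/certs/$crt.crt" -checkend 86400; then
            echo "$crt.crt expires within 24h and would be regenerated" >&2
        fi
    done
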
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
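
The grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here they are simply absent). Compactly, as a sketch:

    ENDPOINT=https://control-plane.minikube.internal:8443
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        f=/etc/kubernetes/$conf
        if ! sudo grep -q "$ENDPOINT" "$f" 2>/dev/null; then
            sudo rm -f "$f"    # missing or pointing elsewhere: let kubeadm re-create it
        fi
    done
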
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
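
Rather than running a full kubeadm init, the restart path replays individual init phases against the generated /var/tmp/minikube/kubeadm.yaml, with the cached v1.20.0 binaries prepended to PATH. The same sequence as the log, sketched as a loop:

    CFG=/var/tmp/minikube/kubeadm.yaml
    BIN=/var/lib/minikube/binaries/v1.20.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is left unquoted on purpose so "certs all" splits into two arguments
        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config "$CFG"
    done
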
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
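The pod_ready lines above report the Ready condition of the metrics-server pod staying "False". A minimal sketch of that readiness check using client-go, assuming the client-go dependency is available and a kubeconfig at the hard-coded path (the path is an assumption; namespace and pod name are taken from the log):

```go
// Sketch of checking a pod's Ready condition, as the pod_ready log lines do.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-4j9zb", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %q Ready: %v\n", pod.Name, podReady(pod))
}
```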
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
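The crio.go lines above show how the runtime gets reconfigured: crictl is pointed at the CRI-O socket, then sed edits the pause image and cgroup manager into /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A minimal Go sketch that assembles those same commands; it only prints them as a dry run, whereas minikube executes the equivalents on the guest via its SSH runner:

```go
// Sketch of the CRI-O reconfiguration commands from the log, printed
// rather than executed so it is safe to run anywhere.
package main

import "fmt"

func main() {
	crictlYAML := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	cmds := []string{
		fmt.Sprintf(`sudo mkdir -p /etc && printf %%s %q | sudo tee /etc/crictl.yaml`, crictlYAML),
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		fmt.Println(c) // dry run: show the command instead of running it
	}
}
```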
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
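The cache_images lines above show the preload fallback: each cached image tarball is stat'ed on the guest, copied only if missing, and then loaded into the container store with "podman load -i", with the stale tag removed via crictl rmi. A minimal Go sketch of that load step, run locally for illustration rather than over minikube's SSH runner:

```go
// Sketch of loading cached image tarballs with "podman load -i", skipping
// any that are not present, as the cache_images log lines describe.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not present: %w", tarball, err)
	}
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	dir := "/var/lib/minikube/images"
	for _, name := range []string{"kube-apiserver_v1.31.1", "kube-proxy_v1.31.1", "etcd_3.5.15-0"} {
		if err := loadCachedImage(filepath.Join(dir, name)); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("loaded", name)
	}
}
```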
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
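The image-load sequence above shells out to "sudo podman load -i <tarball>" once per cached archive so that CRI-O can use the images without pulling from a registry. Below is a minimal Go sketch of that pattern; it is not minikube's cache_images code, and the tarball paths are copied from this log only as examples.

    // Sketch only: load pre-cached image tarballs into local storage with podman,
    // the way the "sudo podman load -i ..." runs above do. Requires root/sudo.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        images := []string{
            "/var/lib/minikube/images/kube-proxy_v1.31.1",
            "/var/lib/minikube/images/etcd_3.5.15-0",
            "/var/lib/minikube/images/coredns_v1.11.3",
        }
        for _, tarball := range images {
            // podman load reads an image archive and stores it locally,
            // where CRI-O can find it without a registry pull.
            out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
            if err != nil {
                log.Fatalf("loading %s failed: %v\n%s", tarball, err, out)
            }
            fmt.Printf("loaded %s\n", tarball)
        }
    }

Loading images serially like this mirrors the per-image "Transferred and loaded ... from cache" progression in the log above.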
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
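The bash one-liner above rewrites /etc/hosts idempotently: strip any previous control-plane.minikube.internal entry, then append the current address. A rough stdlib-only Go equivalent, assuming root and reusing the address from this log, could look like the following sketch.

    // Sketch only: idempotent /etc/hosts update mirroring the grep -v / echo / cp
    // pipeline in the log above. Point hostsPath at a scratch copy when experimenting.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const hostsPath = "/etc/hosts"
        const entry = "192.168.72.19\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop stale control-plane entries, mirroring grep -v
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }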
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
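The "openssl x509 -checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be sketched with Go's crypto/x509 (run on the node as root; the cert paths are taken from the log):

    // Sketch only: report whether a PEM certificate expires within the given window,
    // equivalent to "openssl x509 -noout -in <crt> -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // true if the certificate's NotAfter falls inside the window
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        for _, c := range certs {
            soon, err := expiresWithin(c, 24*time.Hour)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s expires within 24h: %v\n", c, soon)
        }
    }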
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
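The restart path re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated config rather than doing a full "kubeadm init". Below is a hedged sketch of driving those same phases from Go; it is not minikube's implementation, it addresses the versioned binary directly instead of prepending PATH as the logged commands do, and it must run as root.

    // Sketch only: run the kubeadm init phases shown above, in order, against
    // the generated config file.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, phase := range phases {
            args := append([]string{"init", "phase"}, phase...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command(kubeadm, args...).CombinedOutput()
            if err != nil {
                log.Fatalf("kubeadm %v failed: %v\n%s", args, err, out)
            }
        }
    }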
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
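The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines are a simple process-existence poll at roughly 500ms intervals. A small Go sketch of that wait loop follows; the interval and timeout are illustrative, not taken from api_server.go.

    // Sketch only: wait for the kube-apiserver process to appear, the way the
    // pgrep polling above does.
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // pgrep exits non-zero when no process matches
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("kube-apiserver PID: %s", out)
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        log.Fatal("timed out waiting for the kube-apiserver process to appear")
    }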
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
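The healthz wait above tolerates the 403 responses returned while RBAC is still bootstrapping and the 500s returned while post-start hooks finish, and only stops once /healthz answers 200 "ok". A sketch of that polling loop in Go, assuming the same endpoint and skipping TLS verification the way an anonymous probe would:

    // Sketch only: poll the apiserver healthz endpoint until it returns 200.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.72.19:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return
                }
                // 403 during RBAC bootstrap, 500 while post-start hooks run
                log.Printf("healthz returned %d, retrying", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("apiserver never became healthy")
    }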
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
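The pod_ready checks above poll each pod's Ready condition until it flips to True or the 4m0s budget runs out. A rough equivalent using kubectl's jsonpath output is sketched below; the context and pod names are placeholders copied from this run, and this is not the pod_ready.go implementation.

    // Sketch only: poll a pod's Ready condition via kubectl until it reports "True".
    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(context, namespace, pod string) (bool, error) {
        out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
            "get", "pod", pod, "-o",
            `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            ready, err := podReady("no-preload-032884", "kube-system", "metrics-server-6867b74b74-x9qjw")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for the pod to become Ready")
    }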
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
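	Each pass above looks up every expected control-plane component as a CRI container; the empty results (found id: "") are what produce the repeated "No container was found matching" warnings. A minimal manual equivalent of that probe, assuming a shell inside the guest (for example via "minikube ssh") and using only the command already shown in the log, would be:

	  # each probe prints container IDs; empty output means the component is not running
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    echo "== $c =="
	    sudo crictl ps -a --quiet --name=$c
	  done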
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
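	With no component containers found, each cycle falls back to host-level sources. The same data can be pulled by hand with the commands shown in the log (a sketch, assuming the v1.20.0 kubectl binary and kubeconfig paths used in this run):

	  sudo journalctl -u kubelet -n 400        # kubelet logs
	  sudo journalctl -u crio -n 400           # CRI-O logs
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo crictl ps -a || sudo docker ps -a   # container status fallback
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	  # the last command exits 1 with "connection refused" on localhost:8443 while the apiserver is down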
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
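	The interleaved pod_ready.go lines (pids 64543, 64076 and 64930) come from the other StartStop profiles polling their metrics-server pods, which stay at Ready=False for the whole window. A rough manual equivalent of that poll against the matching profile's kubeconfig, using a pod name taken from the log (the jsonpath filter is an illustrative assumption, not part of the captured output):

	kubectl -n kube-system get pod metrics-server-6867b74b74-4xsdw -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints False while the pod is unready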
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
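	(Editor's note: every "describe nodes" attempt on this profile fails the same way because kubectl points at localhost:8443 and nothing is listening there; the crictl listings above find no kube-apiserver container at all. A minimal hedged probe sketch follows; the endpoint is copied from the error text, while the timeout is an assumption chosen for illustration.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Probe the endpoint kubectl is trying to reach; on this node the dial
		// fails with "connection refused" because no apiserver container is running.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}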
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
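	(Editor's note: the interleaved pod_ready lines come from the other profiles polling their metrics-server pods; each loop re-reads the pod and checks its Ready condition until a four-minute deadline expires, as happens at 22:24:25 above for metrics-server-6867b74b74-4j9zb. A hedged client-go sketch of that readiness poll follows; the kubeconfig path, poll interval, and deadline are assumptions, and the pod name is taken from the log only as an example.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-4j9zb", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						fmt.Printf("pod has status %q:%q\n", c.Type, c.Status)
						if c.Status == corev1.ConditionTrue {
							return
						}
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("waitPodCondition: context deadline exceeded")
	}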
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
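	For reference, the log-collection pass above pulls the last 400 lines from each control-plane container plus the kubelet and CRI-O journals. A hedged manual equivalent on the node (the container ID is copied from the kube-apiserver entry above; the rest is a sketch, not part of the test run):
	    sudo crictl ps -a --quiet --name=kube-apiserver        # resolve the container ID, as the test does
	    sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097
	    sudo journalctl -u kubelet -n 400                       # kubelet journal
	    sudo journalctl -u crio -n 400                          # CRI-O journal
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # overall container status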
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
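	The readiness checks recorded above (kube-system pods, default service account, node capacity) can be reproduced against the finished profile; the context name and namespace are taken from the log, while the jsonpath query is only a hedged sketch:
	    kubectl --context embed-certs-562750 -n kube-system get pods      # the 8 kube-system pods listed above
	    kubectl --context embed-certs-562750 get serviceaccount default   # the "default" SA the test waited for
	    kubectl --context embed-certs-562750 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}'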
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
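	The "connection refused" on localhost:8443 above simply means no kube-apiserver container is running yet in this v1.20.0 run, consistent with the empty crictl listings. A hedged way to confirm that by hand from inside the guest (not part of the test):
	    sudo crictl ps -a --quiet --name=kube-apiserver                   # empty here, as the log shows
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"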
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
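	The stale-config check above is mechanical: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here the files are simply absent). A hedged sketch of the same cleanup as a loop, using the endpoint from the log:
	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	        # remove the file when it is missing or does not reference the expected endpoint
	        sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done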
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
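	The two health gates logged for this kubeadm init are plain HTTP probes; a hedged manual equivalent on the node, using the kubelet healthz URL from the log and this profile's apiserver port 8444:
	    curl -s http://127.0.0.1:10248/healthz; echo       # kubelet healthz, the [kubelet-check] probe
	    curl -sk https://localhost:8444/healthz; echo      # apiserver healthz, the [api-check] probe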
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
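	The 496-byte payload pushed to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log; for orientation only, a typical single-node bridge CNI conflist looks roughly like the sketch below (all values illustrative, not the actual bytes written here):
	    # illustrative only; not the conflist the test wrote
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    { "cniVersion": "0.3.1", "name": "bridge",
	      "plugins": [
	        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	          "ipam": { "type": "host-local", "ranges": [[ { "subnet": "10.244.0.0/16" } ]] } },
	        { "type": "portmap", "capabilities": { "portMappings": true } } ] }
	    EOF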
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
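	The elevateKubeSystemPrivileges step timed above corresponds to the repeated "kubectl get sa default" calls just before it: wait for kubeadm to create the default ServiceAccount, and grant kube-system:default cluster-admin so the addon pods can run. A hedged sketch of the same sequence, with paths and names taken from the log:
	    KUBECTL=/var/lib/minikube/binaries/v1.31.1/kubectl
	    CONF=/var/lib/minikube/kubeconfig
	    until sudo "$KUBECTL" get sa default --kubeconfig="$CONF" >/dev/null 2>&1; do
	        sleep 0.5    # the default ServiceAccount appears shortly after the API server is up
	    done
	    sudo "$KUBECTL" create clusterrolebinding minikube-rbac \
	        --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig="$CONF"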
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
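	(Editor's note) The sshutil entries above record minikube building SSH clients to the node (IP 192.168.72.19, port 22, user "docker", key under the minikube machines directory) before running commands such as the `sudo systemctl start kubelet` call that follows. A minimal sketch of such a client using golang.org/x/crypto/ssh is below; it is not minikube's sshutil implementation, and the InsecureIgnoreHostKey host-key handling is a simplification made only to keep the sketch self-contained.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH connects the way the sshutil log entries above describe
	// (key-based auth as user "docker") and runs a single command, returning
	// its combined output.
	func runOverSSH(addr, user, keyPath, cmd string) ([]byte, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Host-key verification is skipped only to keep the sketch short.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return nil, err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return nil, err
		}
		defer session.Close()
		return session.CombinedOutput(cmd)
	}

	func main() {
		// Address, user, and key path are taken from the log line above.
		out, err := runOverSSH(
			"192.168.72.19:22", "docker",
			"/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa",
			"sudo systemctl start kubelet",
		)
		fmt.Println(string(out), err)
	}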
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
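	(Editor's note) The pod_ready.go entries above wait for each control-plane pod in kube-system to report the Ready condition before moving on. A compact sketch of the same check with client-go is below; the kubeconfig path, pod name, and polling interval are placeholders, and this illustrates the pattern rather than minikube's own pod_ready helper.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the named kube-system pod has its Ready
	// condition set to True, the same condition the waits above poll for.
	func isPodReady(cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path is a placeholder; minikube writes one per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			ready, err := isPodReady(cs, "etcd-no-preload-032884")
			if err == nil && ready {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry interval chosen for the sketch
		}
	}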
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
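	(Editor's note) Before declaring the no-preload cluster done, the api_server.go lines above poll https://192.168.72.19:8443/healthz until it returns 200 and then read the control plane version. A stand-alone sketch of that health-wait pattern is below; the endpoint comes from the log, while the timeout, retry interval, and the decision to skip TLS verification are assumptions made purely for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it answers
	// 200 OK or the timeout expires. A real client would present the cluster
	// CA and client certs instead of skipping verification.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: apiserver is serving
				}
			}
			time.Sleep(2 * time.Second) // retry interval is an assumption
		}
		return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.72.19:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("healthz ok")
	}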
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
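	(Editor's note) The four grep/rm pairs above implement a stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is removed otherwise before kubeadm init is retried. The log runs this over SSH with sudo; the sketch below is a rough local-filesystem approximation of the same check, with the file paths taken from the log.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleKubeconfigs removes any of the given kubeconfig files that do
	// not reference the expected control-plane endpoint, mirroring the grep/rm
	// pairs in the log above. Missing files are skipped, as in the log.
	func pruneStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				continue // e.g. "No such file or directory"
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s does not reference %s - removing\n", p, endpoint)
				_ = os.Remove(p)
			}
		}
	}

	func main() {
		pruneStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}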
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
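	(Editor's note) After kubeadm gives up, the lines above show minikube gathering diagnostics: the kubelet and CRI-O journals, dmesg, `kubectl describe nodes`, and a container listing. A small sketch of that collection step is below, run locally with os/exec rather than through the harness's SSH runner; the command strings are copied from the log, everything else is an assumption for illustration.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gatherDiagnostics runs the same shell commands the log shows minikube
	// using to collect evidence after a failed control-plane start, printing
	// each command's combined output.
	func gatherDiagnostics() {
		cmds := []string{
			"sudo journalctl -u kubelet -n 400",
			"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
			"sudo journalctl -u crio -n 400",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		}
		for _, c := range cmds {
			out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
			fmt.Printf("### %s\n%s\n", c, out)
			if err != nil {
				fmt.Printf("(command failed: %v)\n", err)
			}
		}
	}

	func main() {
		gatherDiagnostics()
	}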
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
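	(Editor's note, not part of the captured minikube output: the stream above ends with minikube's own suggestion to pass the kubelet cgroup-driver override referenced in issue 4172. As an illustrative sketch only, with <profile> standing in for whichever cluster profile this run used, the suggested retry and the follow-up checks named in the log would look roughly like:)
	
	  # Retry the start with the cgroup driver the log suggests (flag taken verbatim from the suggestion line above)
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	
	  # If kubeadm still times out waiting for the control plane, inspect the kubelet on the node,
	  # using the same commands the kubeadm error message recommends:
	  minikube ssh -p <profile> "sudo systemctl status kubelet"
	  minikube ssh -p <profile> "sudo journalctl -xeu kubelet | tail -n 50"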
	
	
	==> CRI-O <==
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.150415835Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb7ee5f6-596c-422b-9071-fedc9b68b5b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.150657162Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cb7ee5f6-596c-422b-9071-fedc9b68b5b6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.192157079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d766463a-ba8f-4109-a2a1-127b5e061f5b name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.192308128Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d766463a-ba8f-4109-a2a1-127b5e061f5b name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.193745919Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=145eca87-2aa4-446f-971e-2d34e2381d49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.194180330Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871673194154356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=145eca87-2aa4-446f-971e-2d34e2381d49 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.194827414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1483d4cb-4fe8-4df7-a039-876cf89662f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.194896954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1483d4cb-4fe8-4df7-a039-876cf89662f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.195101657Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1483d4cb-4fe8-4df7-a039-876cf89662f5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.237858089Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=38668e08-a5f4-45b8-90c8-b8b49b093369 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.237951432Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=38668e08-a5f4-45b8-90c8-b8b49b093369 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.239132363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d5d6683a-6e21-4470-96b7-faa8a14f75b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.239830223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871673239803105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d5d6683a-6e21-4470-96b7-faa8a14f75b8 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.240652473Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b0d8b84-d1e3-403a-beea-9b27e59d8ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.240722693Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b0d8b84-d1e3-403a-beea-9b27e59d8ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.240925742Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b0d8b84-d1e3-403a-beea-9b27e59d8ff5 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.277801223Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ad336d6-5031-419f-bcb2-85e59600d7d7 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.277900875Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ad336d6-5031-419f-bcb2-85e59600d7d7 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.279463210Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a042900-7f57-457b-ace1-b762e2197b2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.279866872Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871673279839881,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a042900-7f57-457b-ace1-b762e2197b2b name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.280459303Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6be3c23f-cdc8-4a7c-a462-e9e5a5693193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.280532753Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6be3c23f-cdc8-4a7c-a462-e9e5a5693193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.281018989Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6be3c23f-cdc8-4a7c-a462-e9e5a5693193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.305849120Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3544d624-0fef-4046-aaf0-54bdc1359e26 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 22:34:33 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:34:33.306163699Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lgvzf,Uid:58852f70-36e8-4972-8a6f-b04d8fc9d9f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871122489845018,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.674551018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpwmm,Uid:6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,Namesp
ace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871122454528986,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.647689586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f5c8e51a2ef68a2746ae161584db4924926f77b12726e11af1a34d1684a163d,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-pjcvw,Uid:0b1b708c-664e-4d6c-8da2-79941793d8d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121989796644,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-pjcvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1b708c-664e-4d6c-8da2-79941793d8d9,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Ann
otations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:21.680316847Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6149617c-dd2c-4ac4-ad1b-80f4943d966f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121860985808,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"con
tainers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T22:25:21.554463368Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&PodSandboxMetadata{Name:kube-proxy-h7j48,Uid:0fe6b074-a959-4f74-a981-e2303126c3ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121050604737,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.134910699Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-799783,Uid:791c5d07cde00f129109d7f097fa6c8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109838025280,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 791c5d07cde00f129109d7f097fa6c8d,kubernetes.io/config.seen: 2024-09-20T22:25:09.386361520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&PodSandb
oxMetadata{Name:etcd-default-k8s-diff-port-799783,Uid:e058f700d55f306a119fa5fc03f0b47b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109836426013,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.94:2379,kubernetes.io/config.hash: e058f700d55f306a119fa5fc03f0b47b,kubernetes.io/config.seen: 2024-09-20T22:25:09.386355520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-799783,Uid:1cdc3434602c81ab716edc86ab6b373f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109830054009,Labels:map[string]string{component
: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cdc3434602c81ab716edc86ab6b373f,kubernetes.io/config.seen: 2024-09-20T22:25:09.386360643Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-799783,Uid:ebbf246f93c95e2a5453c35b524e0042,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726871109828715501,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,tier: control-plane,},Annotations:map
[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.94:8444,kubernetes.io/config.hash: ebbf246f93c95e2a5453c35b524e0042,kubernetes.io/config.seen: 2024-09-20T22:25:09.386359343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=3544d624-0fef-4046-aaf0-54bdc1359e26 name=/runtime.v1.RuntimeService/ListPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52423b7dc42cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   e3eff4d2c7e83       coredns-7c65d6cfc9-lgvzf
	5dde9daf56a3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   89b8eb6393180       coredns-7c65d6cfc9-dpwmm
	aad9d02a56da3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   33c77c1e886c9       storage-provisioner
	c7763d880d6ff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   c9eca2c35d9a2       kube-proxy-h7j48
	129346b80ec6c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   0ebb4eae3e660       kube-apiserver-default-k8s-diff-port-799783
	3dedf7017026a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   6cbd7d5eb20b8       etcd-default-k8s-diff-port-799783
	c8b75c462fb57       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   21f494ed38130       kube-controller-manager-default-k8s-diff-port-799783
	1caaaa9c87352       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   b57a3588282b9       kube-scheduler-default-k8s-diff-port-799783
	f1f7491b6b7f7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   ba465b6942cf4       kube-apiserver-default-k8s-diff-port-799783
	
	
	==> coredns [52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-799783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-799783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=default-k8s-diff-port-799783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-799783
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:34:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:30:32 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:30:32 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:30:32 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:30:32 +0000   Fri, 20 Sep 2024 22:25:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    default-k8s-diff-port-799783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73f2972cc5fc48a898fc672ebc64ecf5
	  System UUID:                73f2972c-c5fc-48a8-98fc-672ebc64ecf5
	  Boot ID:                    da1513e0-2a24-4ada-a28c-8fff832983b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dpwmm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-lgvzf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-default-k8s-diff-port-799783                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-default-k8s-diff-port-799783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-799783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-h7j48                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-default-k8s-diff-port-799783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-pjcvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m11s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s (x8 over 9m24s)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s (x7 over 9m24s)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s                  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s                  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s                  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s                  node-controller  Node default-k8s-diff-port-799783 event: Registered Node default-k8s-diff-port-799783 in Controller
	
	
	==> dmesg <==
	[  +0.051554] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep20 22:20] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.546359] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.674260] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000036] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.979151] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.057976] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068349] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.247404] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.136620] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.324904] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.305890] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +0.062195] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.410550] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +5.613240] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.766699] kauditd_printk_skb: 85 callbacks suppressed
	[Sep20 22:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.801126] systemd-fstab-generator[2545]: Ignoring "noauto" option for root device
	[  +4.465228] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.589528] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +5.501654] systemd-fstab-generator[3000]: Ignoring "noauto" option for root device
	[  +0.087052] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.124266] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8] <==
	{"level":"info","ts":"2024-09-20T22:25:10.573528Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T22:25:10.573760Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c23cd90330b5fc4f","initial-advertise-peer-urls":["https://192.168.39.94:2380"],"listen-peer-urls":["https://192.168.39.94:2380"],"advertise-client-urls":["https://192.168.39.94:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.94:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T22:25:10.573800Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T22:25:10.573964Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-09-20T22:25:10.574019Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.94:2380"}
	{"level":"info","ts":"2024-09-20T22:25:10.990345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T22:25:10.990421Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T22:25:10.990450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgPreVoteResp from c23cd90330b5fc4f at term 1"}
	{"level":"info","ts":"2024-09-20T22:25:10.990463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgVoteResp from c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became leader at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c23cd90330b5fc4f elected leader c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.995362Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:10.998487Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c23cd90330b5fc4f","local-member-attributes":"{Name:default-k8s-diff-port-799783 ClientURLs:[https://192.168.39.94:2379]}","request-path":"/0/members/c23cd90330b5fc4f/attributes","cluster-id":"f81fab91992620a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:25:11.000308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:25:11.000829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:25:11.001375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.001475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.001517Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.002091Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:25:11.004975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T22:25:11.009322Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:25:11.009359Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:25:11.009921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:25:11.014750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.94:2379"}
	
	
	==> kernel <==
	 22:34:33 up 14 min,  0 users,  load average: 0.07, 0.09, 0.09
	Linux default-k8s-diff-port-799783 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3] <==
	W0920 22:30:13.788134       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:30:13.788304       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:30:13.789416       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:30:13.789495       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:31:13.789620       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:31:13.789696       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:31:13.789941       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:31:13.790017       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:31:13.791320       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:31:13.791329       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:33:13.791812       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 22:33:13.791811       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:33:13.792328       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 22:33:13.792463       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 22:33:13.793610       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:33:13.793671       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3] <==
	W0920 22:25:04.164860       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.174588       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.196470       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.229194       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.259584       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.263191       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.327823       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.353876       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.355431       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.405647       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.434443       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.533937       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.545569       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.565102       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.568660       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.592796       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.596402       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.662757       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.832952       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.871406       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.959558       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.074077       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.105516       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.278845       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.397348       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554] <==
	E0920 22:29:19.771774       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:29:20.235334       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:29:49.778435       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:29:50.246000       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:30:19.786848       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:20.254353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:30:32.080922       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-799783"
	E0920 22:30:49.794537       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:50.263037       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:31:01.326378       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="342.591µs"
	I0920 22:31:13.329809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="82.289µs"
	E0920 22:31:19.802552       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:20.272374       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:31:49.809522       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:50.280161       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:32:19.817418       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:20.287988       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:32:49.825921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:50.296671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:33:19.832788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:33:20.305439       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:33:49.841027       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:33:50.313152       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:34:19.848181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:34:20.321551       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:25:21.608877       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:25:21.623982       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	E0920 22:25:21.624128       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:25:21.711978       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:25:21.712101       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:25:21.712195       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:25:21.726537       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:25:21.726890       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:25:21.726924       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:25:21.760544       1 config.go:199] "Starting service config controller"
	I0920 22:25:21.760799       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:25:21.760919       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:25:21.760982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:25:21.763644       1 config.go:328] "Starting node config controller"
	I0920 22:25:21.763743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:25:21.862548       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:25:21.862610       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:25:21.864064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5] <==
	W0920 22:25:12.827129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 22:25:12.827157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:12.827216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:12.827303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:12.827382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:25:12.827411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.723441       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:25:13.724193       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 22:25:13.735542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:25:13.735607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.745165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:25:13.745437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.764543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 22:25:13.764681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.782097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:25:13.782413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.893464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 22:25:13.893584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.970032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:13.970166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:14.059893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:14.060458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:14.096233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:14.096413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 22:25:15.508557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:33:16 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:16.306693    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:33:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:25.469772    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871605469488765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:25.469810    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871605469488765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:31 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:31.306811    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:33:35 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:35.472227    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871615471871770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:35 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:35.472289    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871615471871770,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:42 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:42.306916    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:33:45 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:45.474131    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871625473832734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:45 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:45.474763    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871625473832734,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:55 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:55.476454    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871635476150841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:55 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:55.476506    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871635476150841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:33:56 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:33:56.306604    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:34:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:05.477763    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871645477527400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:05.477813    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871645477527400,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:09 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:09.310663    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:15.355148    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:15.479580    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871655479350243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:15.479624    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871655479350243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:21 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:21.307007    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:34:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:25.481471    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871665481091456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:34:25.481514    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871665481091456,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9] <==
	I0920 22:25:22.086992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:25:22.099166       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:25:22.099208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:25:22.120073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:25:22.122015       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f!
	I0920 22:25:22.126965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65cc7b7b-451c-4264-9d63-260b4afb2ebe", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f became leader
	I0920 22:25:22.225321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pjcvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw: exit status 1 (64.543347ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pjcvw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0920 22:28:12.211744   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-032884 -n no-preload-032884
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:35:20.108731556 +0000 UTC m=+6460.239866525
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-032884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-032884 logs -n 25: (2.236106889s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
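
The san list in the line above (loopback, the node IP, the machine name, localhost, minikube) is what makes one server certificate valid for every name the API server might be reached by. A minimal, hypothetical sketch of generating a cert with those SANs using Go's crypto/x509; it self-signs for brevity, whereas the step above signs with the ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key for the server cert; the real flow reuses an existing CA key pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-562750"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Subject Alternative Names mirroring the san=[...] list logged above.
		DNSNames:    []string{"embed-certs-562750", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.193")},
	}
	// Self-signed here for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
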
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
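
The fix.go lines above parse the guest's `date +%s.%N` output, compare it against the host clock, and skip a resync when the drift is inside a tolerance. A self-contained sketch of that comparison; the 2s tolerance is an assumption for illustration, not minikube's actual constant:

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses `date +%s.%N` output from the guest and returns how
// far it drifts from the given host reference time. float64 keeps roughly
// microsecond precision at this magnitude, plenty for a millisecond-level check.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	sec, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Numbers taken from the log lines above.
	delta, err := guestClockDelta("1726870795.575298210", time.Unix(0, 1726870795501991982))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed value, for illustration only
	ok := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%v within %v: %v\n", delta, tolerance, ok)
}
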
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
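
The sed invocations above each rewrite a single key in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, default sysctls). A rough Go equivalent of that replace-a-key pattern, extended to append when the key is missing (the logged commands only replace); needs root for the real path:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfKey replaces an existing `key = ...` line, or appends one if the key
// is not present yet, loosely mirroring the sed edits above.
func setConfKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	line := fmt.Sprintf("%s = %q", key, value)
	var out string
	if re.Match(data) {
		out = re.ReplaceAllString(string(data), line)
	} else {
		out = string(data) + "\n" + line + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// Same keys and path the provisioning step rewrites.
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	_ = setConfKey(conf, "cgroup_manager", "cgroupfs")
}
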
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
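
The failed sysctl above just means the br_netfilter module is not loaded yet, so the code falls back to modprobe and then turns on IPv4 forwarding. A small sketch of that probe-then-load pattern; it needs root and uses the same paths as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If this sysctl file is missing, the br_netfilter module is not loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter: %v\n%s", err, out)
			return
		}
	}
	// Then enable IPv4 forwarding, like the `echo 1 > .../ip_forward` step above.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Printf("enable ip_forward: %v (root required)\n", err)
	}
}
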
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
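
The one-liner above keeps the host.minikube.internal entry idempotent: it filters out any stale line for that name, appends the fresh mapping, and copies the temp file back over /etc/hosts. A rough in-process equivalent, for illustration only (the real step shells out exactly as logged):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry rewrites path so exactly one line maps name to ip,
// mirroring the grep -v / echo / cp pipeline in the log line above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the logged command; needs write access to /etc/hosts.
	_ = ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal")
}
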
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
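
The kubeadm.go:181/187 lines above show an options struct being rendered into the kubeadm YAML that was just printed. A tiny, hypothetical text/template sketch of that kind of rendering; the struct and template here are invented for illustration and cover only a few of the fields minikube actually templates:

package main

import (
	"os"
	"text/template"
)

// nodeOpts is a made-up subset of the options shown in the kubeadm.go:181 log line.
type nodeOpts struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := nodeOpts{
		AdvertiseAddress: "192.168.50.193",
		BindPort:         8443,
		NodeName:         "embed-certs-562750",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.31.1",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
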
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
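
Two patterns repeat in the openssl block above: each CA ends up symlinked into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), and every control-plane cert is probed with -checkend 86400, whose non-zero exit signals expiry within a day. A hedged sketch of both checks, shelling out to openssl with the same flags as the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks certPath into dir under <openssl-subject-hash>.0,
// the naming the system trust store looks certificates up by.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // ignore error; the link is recreated below
	return os.Symlink(certPath, link)
}

// expiresWithinADay mirrors `openssl x509 -checkend 86400`: a non-zero exit
// status means the certificate will expire within the next 86400 seconds.
func expiresWithinADay(certPath string) bool {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	return err != nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	if err := linkBySubjectHash(cert, "/etc/ssl/certs"); err != nil {
		fmt.Println("link:", err)
	}
	fmt.Println("expires within 24h:", expiresWithinADay(cert))
}
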
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
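
	Note: the series of "openssl x509 ... -checkend 86400" runs above verifies that each existing control-plane certificate stays valid for at least another 24 hours before the cluster is reused. A minimal standalone sketch of the same check in Go (not minikube's actual implementation; the certificate path is only an example taken from the log):

	    package main

	    import (
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	        "time"
	    )

	    func main() {
	        // Path is illustrative; any PEM-encoded certificate works.
	        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	        if err != nil {
	            panic(err)
	        }
	        block, _ := pem.Decode(data)
	        if block == nil {
	            panic("no PEM block found")
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            panic(err)
	        }
	        // Equivalent of `openssl x509 -checkend 86400`: fail if the
	        // certificate expires within the next 24 hours.
	        if time.Until(cert.NotAfter) < 24*time.Hour {
	            fmt.Println("certificate expires within 24h; regeneration needed")
	            os.Exit(1)
	        }
	        fmt.Println("certificate valid for at least another 24h")
	    }
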
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
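
	Note: the healthz progression above (connection refused, then 403 for the anonymous user, then 500 while post-start hooks finish, then 200) is the normal startup sequence for the apiserver. A minimal polling sketch in Go, which skips TLS verification and authentication purely for illustration (minikube's real check uses client certificates; the address is copied from the log):

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 3 * time.Second,
	            // Illustration only: do not verify the server certificate.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        url := "https://192.168.39.94:8444/healthz"
	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	                fmt.Println("healthz returned", resp.StatusCode)
	            } else {
	                fmt.Println("healthz not reachable yet:", err)
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Println("timed out waiting for apiserver health")
	    }
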
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
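
	Note: the pod_ready waits above poll each system-critical pod until its Ready condition reports True, skipping pods whose node is not yet Ready. A rough sketch of that kind of wait with a recent client-go (hypothetical kubeconfig path; pod name taken from the log; this is not the code minikube uses):

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	        if err != nil {
	            panic(err)
	        }
	        clientset, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        deadline := time.Now().Add(4 * time.Minute)
	        for time.Now().Before(deadline) {
	            pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-mqkfs", metav1.GetOptions{})
	            if err == nil {
	                for _, cond := range pod.Status.Conditions {
	                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
	                        fmt.Println("pod is Ready")
	                        return
	                    }
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        fmt.Println("timed out waiting for pod to become Ready")
	    }
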
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
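
	Note: WaitForSSH above repeatedly runs "exit 0" against the VM with an external ssh client until the command succeeds. A minimal equivalent using golang.org/x/crypto/ssh (illustrative only; key path and address copied from the log, options simplified):

	    package main

	    import (
	        "fmt"
	        "os"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    func main() {
	        key, err := os.ReadFile("/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa")
	        if err != nil {
	            panic(err)
	        }
	        signer, err := ssh.ParsePrivateKey(key)
	        if err != nil {
	            panic(err)
	        }
	        config := &ssh.ClientConfig{
	            User:            "docker",
	            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	            Timeout:         10 * time.Second,
	        }
	        client, err := ssh.Dial("tcp", "192.168.61.2:22", config)
	        if err != nil {
	            panic(err)
	        }
	        defer client.Close()
	        session, err := client.NewSession()
	        if err != nil {
	            panic(err)
	        }
	        defer session.Close()
	        out, err := session.CombinedOutput("exit 0")
	        fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	    }
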
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
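
	Note: provision.go above generates a server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube, and the profile name. A simplified sketch with crypto/x509 that produces a self-signed certificate with the same SANs (minikube actually signs with its CA key; the names and IPs below are just those from the log):

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-593412"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-593412"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.2")},
	        }
	        // Self-signed for brevity; a real provisioning flow signs with the CA key pair.
	        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }
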
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
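
Condensed, the CRI-O preparation the log walks through above comes down to a few idempotent edits plus a restart. A sketch using the same paths and values quoted in the log (the conmon_cgroup tweak and the bridge-CNI disabling are omitted here for brevity):

    # point crictl at the CRI-O socket
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # pin the pause image and cgroup driver expected for this Kubernetes version
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    # make bridged traffic visible to iptables and enable forwarding
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio
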
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
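
The preload restore above follows a simple pattern: if the tarball is not already on the guest, copy it over from the host cache, unpack it into /var, then delete it. A sketch of the guest-side half (paths and tar flags are taken from the log):

    # the host-side scp of preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # only happens when this existence check fails, as it did above
    stat -c "%s %y" /preloaded.tar.lz4 \
      || echo "not present yet - minikube copies it from its local cache"
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4
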
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
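
The image-cache pass above works per image: ask podman for the image ID, compare it against the digest pinned for v1.20.0, and when it does not match, remove the tag and reload it from the host cache. A sketch of that check for a single image (the pinned hash is the one the log quotes for kube-proxy):

    want="10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
    have=$(sudo podman image inspect --format '{{.Id}}' registry.k8s.io/kube-proxy:v1.20.0 2>/dev/null)
    if [ "$have" != "$want" ]; then
      sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
      # minikube would then load the image from
      # .minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0 -
      # the step that fails above because the cache files are missing on this host
    fi
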
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
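
Taken together, the node plumbing above writes the kubelet drop-in and unit, pins the control-plane hostname, and starts the kubelet. A rough sketch with the paths, address, and hostname from the log (the hosts-file edit here is a simplified append; minikube actually rewrites the line in place as shown above):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
    # 10-kubeadm.conf and kubelet.service are copied in from memory by minikube;
    # their contents are the [Unit]/[Service]/[Install] snippet printed earlier
    grep -q 'control-plane.minikube.internal' /etc/hosts \
      || echo '192.168.61.2 control-plane.minikube.internal' | sudo tee -a /etc/hosts
    sudo systemctl daemon-reload
    sudo systemctl start kubelet
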
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
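
Each of the three ln -fs calls above uses the certificate's OpenSSL subject hash as the link name (b5213941.0, 3ec20f2e.0, 51391683.0 in this run). A minimal sketch of how one of those names is derived:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here
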
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
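
The sequence of -checkend calls above is a 24-hour expiry sweep (86400 seconds) over the control-plane client and serving certs. The same check, expressed as a loop over the paths from the log:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c.crt expires within 24h - flagged for regeneration"
    done
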
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false M
ountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
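
The image-load phase above follows a simple pattern: check whether each required image already exists in the container runtime at the expected digest, remove stale copies with crictl, copy the cached tarball onto the node, and load it with podman. A condensed sketch of that loop in Go (illustrative only, not minikube's cache_images.go; the image-to-tarball mapping and paths below are assumptions):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // loadCachedImage removes any stale copy of the image from the CRI runtime
    // and loads the cached tarball with podman, mirroring the rmi/load pattern
    // visible in the log above.
    func loadCachedImage(image, tarball string) error {
        // Best-effort removal; the image may not be present yet.
        if out, err := exec.Command("sudo", "crictl", "rmi", image).CombinedOutput(); err != nil {
            log.Printf("rmi %s (ignored): %v: %s", image, err, strings.TrimSpace(string(out)))
        }
        // Load the image tarball into the runtime's image store.
        if out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
        }
        return nil
    }

    func main() {
        // Hypothetical mapping; the real paths come from minikube's cache directory.
        images := map[string]string{
            "registry.k8s.io/kube-apiserver:v1.31.1": "/var/lib/minikube/images/kube-apiserver_v1.31.1",
            "registry.k8s.io/etcd:3.5.15-0":          "/var/lib/minikube/images/etcd_3.5.15-0",
        }
        for img, tar := range images {
            if err := loadCachedImage(img, tar); err != nil {
                log.Fatal(err)
            }
            fmt.Println("loaded", img)
        }
    }
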
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
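
The openssl x509 -checkend 86400 calls above verify that each control-plane certificate will still be valid for at least the next 24 hours (86400 seconds); a failing check would trigger certificate regeneration. A rough Go equivalent of that single check (a sketch assuming a PEM-encoded certificate on disk; the path below is illustrative):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given duration, the same question `openssl x509 -checkend` answers.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // Illustrative path; the log checks several certs under /var/lib/minikube/certs.
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
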
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
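
The grep/rm sequence above is a simple stale-kubeconfig check: each file under /etc/kubernetes must point at https://control-plane.minikube.internal:8443, and any file that does not (or does not exist, as is the case here) is removed so the following kubeadm init phases can regenerate it. A condensed sketch of that logic (illustrative only, not the kubeadm.go implementation):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, path := range confs {
            data, err := os.ReadFile(path)
            // A missing file and a file without the expected endpoint are treated
            // the same: remove it and let `kubeadm init phase kubeconfig` rewrite it.
            if err != nil || !strings.Contains(string(data), endpoint) {
                if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
                    log.Printf("remove %s: %v", path, rmErr)
                    continue
                }
                log.Printf("%s: stale or missing, removed", path)
            }
        }
    }
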
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
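
The progression above (connection refused, then 403 for the anonymous user, then 500 while post-start hooks such as rbac/bootstrap-roles finish, then 200) is the normal apiserver startup sequence; the wait loop simply polls /healthz until it returns 200 or the budget runs out. A minimal polling sketch (not minikube's api_server.go; the address and timeout are illustrative, and certificate verification is skipped here because the probe runs before client credentials are wired up):

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver's /healthz endpoint until it returns
    // 200 OK or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The probe does not trust the apiserver's serving cert, so skip verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                log.Printf("healthz returned %d, retrying", resp.StatusCode)
            } else {
                log.Printf("healthz not reachable yet: %v", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.72.19:8443/healthz", 4*time.Minute); err != nil {
            log.Fatal(err)
        }
        fmt.Println("apiserver healthy")
    }
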
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
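
The repeated pod_ready.go lines throughout this log are a polling loop: each watched pod's Ready condition is re-checked every few seconds until it turns True or the 4m0s budget expires, and, as shown above, pods scheduled on a node that is itself not Ready are skipped. A minimal client-go sketch of that check (illustrative only, not minikube's pod_ready.go; the kubeconfig path and pod name below are assumptions):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative kubeconfig path and pod name.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-032884", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        log.Fatal("timed out waiting for pod to become Ready")
    }
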
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
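	(The pod_ready.go entries above record a simple poll: fetch the pod, check its Ready condition, and retry until the 4m0s budget runs out. Below is a minimal sketch of that pattern, assuming client-go and a local kubeconfig; it is illustrative, not minikube's own implementation, and the namespace/pod name are simply copied from the log.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a kubeconfig at the default location (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-032884", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second) // retry interval; the exact cadence is up to the caller
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}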
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
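	(The block above is the v1.20.0 start-up wait: pgrep -xnf kube-apiserver.*minikube.* is retried roughly every 500ms per the timestamps, and when no apiserver container turns up the harness falls back to gathering logs, with kubectl describe nodes failing because nothing answers on localhost:8443. Below is a minimal sketch of the retry-until-timeout part, assuming a local pgrep instead of the sudo-over-SSH invocation the log records; illustrative only.)

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess retries pgrep until a process matching pattern appears
	// or the context's deadline expires.
	func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			// pgrep exits 0 when at least one process matches the pattern.
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver process is running")
	}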
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
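	(Each retry that still finds no kube-apiserver container ends in the same diagnostics sweep shown above: crictl listings for every control-plane component, then the kubelet and CRI-O journals and dmesg. A rough sketch of that collection step follows, assuming the commands run locally via os/exec rather than through ssh_runner; this is illustrative, not the actual logs.go code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// collect runs a command and returns its combined output; a failing
	// command (e.g. kubectl with the apiserver down) still yields its stderr.
	func collect(name string, args ...string) string {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Sprintf("%s failed: %v\n%s", name, err, out)
		}
		return string(out)
	}

	func main() {
		diagnostics := map[string]string{
			"containers": collect("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver"),
			"kubelet":    collect("sudo", "journalctl", "-u", "kubelet", "-n", "400"),
			"crio":       collect("sudo", "journalctl", "-u", "crio", "-n", "400"),
			"dmesg":      collect("sudo", "dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"),
		}
		for section, out := range diagnostics {
			fmt.Printf("=== %s ===\n%s\n", section, out)
		}
	}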
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
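The cycle ending above is minikube's diagnostic fallback when no control-plane containers are running: each expected component is probed with crictl, and when nothing is found it falls back to gathering kubelet, dmesg, CRI-O, and container-status output (the `kubectl describe nodes` step fails because nothing is listening on localhost:8443). A minimal sketch of that per-component probe, assuming shell access to the node and that crictl is on the PATH (the commands are taken verbatim from the log; the component list is illustrative), would be:

    # Probe each expected control-plane component the way the log above does;
    # an empty result corresponds to the "No container was found matching ..." lines.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
        echo "No container was found matching \"$name\""
      fi
    done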
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
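The repeated crictl probes above are how the run decides whether any control-plane containers survived on this node (the cri.go/logs.go calls issued over ssh_runner). A minimal local sketch of the same probe, assuming crictl is on the PATH; the helper name is illustrative, not minikube's own:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the "sudo crictl ps -a --quiet --name=<component>"
// probe from the log: it returns the IDs of all containers (in any state)
// whose name matches the component, or an empty slice when none exist.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: probe failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}

In the failing runs above every probe returns an empty list, which is why the subsequent "describe nodes" step also fails: there is no apiserver container to answer on localhost:8443.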
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
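Once a component's container ID is known, the per-component log capture above is a plain "crictl logs --tail 400 <id>". A one-function sketch of that call (only "os/exec" is needed; the helper name is illustrative):

// dumpContainerLogs fetches the last 400 log lines for one container ID,
// matching the "sudo /usr/bin/crictl logs --tail 400 <id>" calls above.
func dumpContainerLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}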
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
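Before the "Done!" line, the run waits for the kube-system pods, the default service account, the kubelet service, and NodePressure. A rough client-go sketch of just the pod wait, assuming a kubeconfig path; note that the real check (system_pods.go) proceeds despite the Pending metrics-server seen above, while this simplified version insists on every pod being Running:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeSystemPods polls the kube-system namespace until every pod
// reports phase Running, or the deadline expires.
func waitForKubeSystemPods(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
					break
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("timed out waiting for kube-system pods")
}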
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
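The "! Unable to restart control-plane node(s), will reset cluster" path taken here boils down to a forced "kubeadm reset" followed by a fresh "kubeadm init" against the generated /var/tmp/minikube/kubeadm.yaml; both commands appear verbatim in this log. A condensed sketch of that sequence, with the long --ignore-preflight-errors list abbreviated and the helper names illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// resetAndReinit re-runs the fallback seen in the log: wipe the failed
// control plane, then bootstrap a new one from /var/tmp/minikube/kubeadm.yaml.
func resetAndReinit(binDir string) error {
	env := "PATH=" + binDir + ":" + os.Getenv("PATH")
	resetCmd := exec.Command("sudo", "env", env, "kubeadm", "reset",
		"--cri-socket", "/var/run/crio/crio.sock", "--force")
	if out, err := resetCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm reset: %v\n%s", err, out)
	}
	initCmd := exec.Command("sudo", "env", env, "kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
	if out, err := initCmd.CombinedOutput(); err != nil {
		return fmt.Errorf("kubeadm init: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := resetAndReinit("/var/lib/minikube/binaries/v1.31.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}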
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
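The grep/rm sequence just above is the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint (here https://control-plane.minikube.internal:8443) is removed so that the following "kubeadm init" can regenerate it. A compact sketch, running the same commands locally instead of through ssh_runner; only "os/exec" is needed and the helper name is illustrative:

// cleanStaleKubeconfigs removes every kubeconfig under /etc/kubernetes that
// does not reference the expected control-plane endpoint. grep exits non-zero
// both when the endpoint is absent and when the file is missing (status 2 in
// the log above), so either way the file is cleared before kubeadm rewrites it.
func cleanStaleKubeconfigs(endpoint string) {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}

For the default-k8s-diff-port run later in this log the same cleanup is performed with endpoint https://control-plane.minikube.internal:8444.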
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
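The [kubelet-check] and [api-check] lines are healthz polls: the kubelet answers plain HTTP on http://127.0.0.1:10248/healthz, while the API server is checked over HTTPS. A bare sketch of such a poll; the real API-server check uses the cluster CA and client certificates, which this plain http.Get omits, so it only fits the local kubelet endpoint:

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls a healthz endpoint until it answers 200 OK or the
// timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

// e.g. waitHealthz("http://127.0.0.1:10248/healthz", 4*time.Minute)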
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
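Editor's note (not part of the log): the join commands printed above pin the cluster CA via --discovery-token-ca-cert-hash. That value is conventionally the SHA-256 of the CA public key and can be recomputed on the control-plane node roughly like this, assuming the default kubeadm CA path /etc/kubernetes/pki/ca.crt:

    # recompute the discovery token CA cert hash (standard kubeadm recipe)
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'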
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
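For context (not taken from the log): the 1-k8s.conflist copied above is a small JSON file in the standard CNI conflist format. A minimal sketch of a bridge-style configuration of that shape is shown below; the exact 496-byte file, subnet, and plugin list minikube writes may differ:

    # illustrative bridge CNI conflist; values are assumptions, not the real file
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF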
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
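The burst of identical "kubectl get sa default" runs above is a poll: the command is retried every half second until the "default" ServiceAccount exists. A rough shell equivalent of that loop, reusing the paths shown in the log, would be:

    # poll until the default ServiceAccount is created
    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done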
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
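Outside the test harness, the same node-readiness condition can be checked with kubectl directly; an illustrative equivalent (assuming the kubeconfig context is named after the profile) is:

    # wait for the node's Ready condition, up to the same 6m budget
    kubectl --context default-k8s-diff-port-799783 wait --for=condition=Ready \
      node/default-k8s-diff-port-799783 --timeout=6m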
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
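Once those manifests are applied, a quick manual way to confirm the metrics-server addon wired up (illustrative; the test performs its own verification) is to check the deployment rollout and the metrics APIService it registers:

    kubectl -n kube-system rollout status deployment/metrics-server --timeout=2m
    kubectl get apiservice v1beta1.metrics.k8s.io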
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
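That health probe is just an HTTPS GET against the apiserver's /healthz endpoint; done by hand it looks like the following (-k skips certificate verification, or point --cacert at the cluster CA instead):

    curl -k https://192.168.39.94:8444/healthz
    # expected body on success: ok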
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
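At this point the profile's kubeconfig context is active, so ordinary kubectl calls work against the new cluster, for example:

    kubectl --context default-k8s-diff-port-799783 get nodes
    kubectl --context default-k8s-diff-port-799783 get pods -A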
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
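The wait above gave up because metrics-server-6867b74b74-x9qjw never reported Ready. The usual first diagnostic steps for a pod stuck NotReady (not part of the test flow) are to describe it and read the container logs; image-pull and readiness-probe failures show up in the Events section of the describe output:

    kubectl -n kube-system describe pod metrics-server-6867b74b74-x9qjw
    kubectl -n kube-system logs deploy/metrics-server --tail=50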
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
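The repeated kubelet-check failures above (connection refused on 127.0.0.1:10248) mean the kubelet is not serving its health endpoint on that node. Illustrative commands for digging in on the machine itself:

    sudo systemctl status kubelet
    sudo journalctl -u kubelet --no-pager -n 50
    curl -sSL http://localhost:10248/healthz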
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
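The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed before kubeadm init runs. Compressed into one loop, the logic is roughly:

    for f in admin kubelet controller-manager scheduler; do
      # drop configs that don't reference the expected endpoint (or don't exist)
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done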
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
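The failure above reduces to the kubelet never answering its health check on port 10248, and the kubeadm output together with minikube's own suggestion already name the triage commands. A minimal sketch of that path, assuming shell access to the affected node (for example via 'minikube ssh -p <profile>', where <profile> is a placeholder for the failing profile name, not taken from this log), might look like:

    # Check whether the kubelet service is running and read its recent logs
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 100

    # List any control-plane containers CRI-O started (command taken verbatim from the kubeadm output above)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # Inspect a failing container's logs, substituting an ID from the previous step
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

    # If the cgroup-driver mismatch minikube points at is the cause, retry the start with the flag it recommends
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

This is only the troubleshooting sequence the log itself recommends, collected in one place; it is not a confirmed root cause for this run.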
	
	
	==> CRI-O <==
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.666180126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871721666159109,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1834ff42-82a1-4d59-ad45-97ab5bce5876 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.666764941Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98f95742-b975-4920-8bfa-4781b2f03225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.666820589Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98f95742-b975-4920-8bfa-4781b2f03225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.667010869Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98f95742-b975-4920-8bfa-4781b2f03225 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.711529914Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd7a349a-4c8e-4a5f-9f51-eca20449acb6 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.711639452Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd7a349a-4c8e-4a5f-9f51-eca20449acb6 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.713158244Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b04ceb07-d347-48b2-abfb-47364572b15e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.714035376Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871721714000894,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b04ceb07-d347-48b2-abfb-47364572b15e name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.714830230Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=892eda1d-aee8-4913-a791-1d5275b22997 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.714902725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=892eda1d-aee8-4913-a791-1d5275b22997 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.715243731Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=892eda1d-aee8-4913-a791-1d5275b22997 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.765297841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=02d3e943-224c-4d5d-b89c-b397900e6780 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.765400986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=02d3e943-224c-4d5d-b89c-b397900e6780 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.767050453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a134b38c-3fdc-4ab4-b218-28c2eac3aa37 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.767831243Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871721767794419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a134b38c-3fdc-4ab4-b218-28c2eac3aa37 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.768489901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05d460f7-c496-4b39-a959-4736e787d193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.768563789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05d460f7-c496-4b39-a959-4736e787d193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.768844861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05d460f7-c496-4b39-a959-4736e787d193 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.804220429Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8ac69745-a721-4cc1-bf4a-ce8d02b58833 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.804317761Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8ac69745-a721-4cc1-bf4a-ce8d02b58833 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.805550075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de969804-8807-4a56-9723-e235b172e073 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.805907931Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871721805880879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de969804-8807-4a56-9723-e235b172e073 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.806576579Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=37d86a98-4fb6-4d98-92aa-0201dc322486 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.806629761Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=37d86a98-4fb6-4d98-92aa-0201dc322486 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:35:21 no-preload-032884 crio[715]: time="2024-09-20 22:35:21.806823002Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=37d86a98-4fb6-4d98-92aa-0201dc322486 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9995e2121841e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   23e16e9df6629       storage-provisioner
	79aea64097712       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   076c46c9214ea       coredns-7c65d6cfc9-fbxtz
	6503e43d3b796       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   ed2cc0a35d069       coredns-7c65d6cfc9-jsx2x
	4f617df1f10f6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   b59c5341b5b8c       kube-proxy-fdc7z
	a10d6a76354cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   46af1c368db0a       etcd-no-preload-032884
	bbf9164666f59       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   57252c167dbdb       kube-controller-manager-no-preload-032884
	1c9a8b2aa3d02       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   a7c1809b0c827       kube-apiserver-no-preload-032884
	92c8d91a89410       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   cb01b1f1caf4d       kube-scheduler-no-preload-032884
	476bba60185cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   225bc4fe6977e       kube-apiserver-no-preload-032884
	
	
	==> coredns [6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-032884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-032884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=no-preload-032884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-032884
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:35:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:31:22 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:31:22 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:31:22 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:31:22 +0000   Fri, 20 Sep 2024 22:26:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.19
	  Hostname:    no-preload-032884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5beecf3b2736436692806fb1417eaac1
	  System UUID:                5beecf3b-2736-4366-9280-6fb1417eaac1
	  Boot ID:                    b13194a4-96d9-4ea9-9c38-cdd85010f043
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fbxtz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-jsx2x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-032884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-032884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-032884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-proxy-fdc7z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	  kube-system                 kube-scheduler-no-preload-032884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-xvbg9              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m8s   kube-proxy       
	  Normal  Starting                 9m16s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s  kubelet          Node no-preload-032884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s  kubelet          Node no-preload-032884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s  kubelet          Node no-preload-032884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s  node-controller  Node no-preload-032884 event: Registered Node no-preload-032884 in Controller
	
	
	==> dmesg <==
	[  +0.058598] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042352] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.410916] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.680604] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000066] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.046762] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055364] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054549] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.181850] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.148133] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.295182] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Sep20 22:21] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.061425] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.714519] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +4.597016] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.169525] kauditd_printk_skb: 85 callbacks suppressed
	[Sep20 22:25] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.330505] systemd-fstab-generator[3010]: Ignoring "noauto" option for root device
	[Sep20 22:26] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.839127] systemd-fstab-generator[3331]: Ignoring "noauto" option for root device
	[  +4.906543] systemd-fstab-generator[3455]: Ignoring "noauto" option for root device
	[  +0.143676] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.165920] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0] <==
	{"level":"info","ts":"2024-09-20T22:26:01.265476Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-20T22:26:01.265725Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"16426d17e57aa155","initial-advertise-peer-urls":["https://192.168.72.19:2380"],"listen-peer-urls":["https://192.168.72.19:2380"],"advertise-client-urls":["https://192.168.72.19:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.19:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-20T22:26:01.265766Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-20T22:26:01.265895Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.19:2380"}
	{"level":"info","ts":"2024-09-20T22:26:01.265925Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.19:2380"}
	{"level":"info","ts":"2024-09-20T22:26:01.302514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 received MsgPreVoteResp from 16426d17e57aa155 at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 received MsgVoteResp from 16426d17e57aa155 at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 16426d17e57aa155 elected leader 16426d17e57aa155 at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.306741Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.306985Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"16426d17e57aa155","local-member-attributes":"{Name:no-preload-032884 ClientURLs:[https://192.168.72.19:2379]}","request-path":"/0/members/16426d17e57aa155/attributes","cluster-id":"932de0d4c307fd64","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:26:01.309456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:26:01.309495Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"932de0d4c307fd64","local-member-id":"16426d17e57aa155","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309592Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309637Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309650Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:26:01.312673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:26:01.313386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.19:2379"}
	{"level":"info","ts":"2024-09-20T22:26:01.319491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:26:01.319528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:26:01.320097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:26:01.324826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:35:22 up 14 min,  0 users,  load average: 0.19, 0.17, 0.13
	Linux no-preload-032884 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f] <==
	W0920 22:31:04.638061       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:31:04.638310       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:31:04.639391       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:31:04.639485       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:32:04.640150       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:32:04.640278       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:32:04.640181       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:32:04.640371       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:32:04.641698       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:32:04.641790       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:34:04.642174       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:34:04.642321       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:34:04.642386       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:34:04.642517       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:34:04.643711       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:34:04.643776       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84] <==
	W0920 22:25:56.864274       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:56.895109       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:56.909114       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.110274       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.118183       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.146622       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.187836       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.203782       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.214314       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.222632       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.328065       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.369285       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.370653       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.384293       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.388790       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.466901       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.487873       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.519166       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.549141       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.550612       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.653600       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.676298       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.688343       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.697844       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.750374       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2] <==
	E0920 22:30:10.507039       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:11.067860       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:30:40.515520       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:30:41.077560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:31:10.522348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:11.085610       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:31:22.448760       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-032884"
	E0920 22:31:40.529697       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:31:41.094220       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:32:07.278607       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="774.938µs"
	E0920 22:32:10.538825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:11.102522       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:32:21.286992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.811µs"
	E0920 22:32:40.546876       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:32:41.113524       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:33:10.556140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:33:11.122078       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:33:40.564354       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:33:41.133378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:34:10.573597       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:34:11.141675       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:34:40.580605       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:34:41.150101       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:35:10.587979       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:35:11.159260       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:26:12.896033       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:26:12.997043       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.19"]
	E0920 22:26:13.013316       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:26:13.218623       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:26:13.218671       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:26:13.218698       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:26:13.221531       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:26:13.222375       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:26:13.222503       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:26:13.224037       1 config.go:199] "Starting service config controller"
	I0920 22:26:13.224249       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:26:13.224356       1 config.go:328] "Starting node config controller"
	I0920 22:26:13.224378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:26:13.224626       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:26:13.224654       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:26:13.324874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:26:13.324961       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:26:13.324942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2] <==
	W0920 22:26:03.710711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:03.710720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:03.710900       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:26:03.710934       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 22:26:04.525247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 22:26:04.525406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.540500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:26:04.540861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.613244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 22:26:04.613280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.652356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 22:26:04.654559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.828020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:04.828197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.867207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:26:04.867664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.918211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 22:26:04.918320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.942806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 22:26:04.942862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.962155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:04.962254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:05.170322       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:26:05.170491       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 22:26:07.392657       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:34:16 no-preload-032884 kubelet[3338]: E0920 22:34:16.422063    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871656421697943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:16 no-preload-032884 kubelet[3338]: E0920 22:34:16.422102    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871656421697943,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:17 no-preload-032884 kubelet[3338]: E0920 22:34:17.258487    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:34:26 no-preload-032884 kubelet[3338]: E0920 22:34:26.424726    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871666424051032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:26 no-preload-032884 kubelet[3338]: E0920 22:34:26.424767    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871666424051032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:31 no-preload-032884 kubelet[3338]: E0920 22:34:31.258563    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:34:36 no-preload-032884 kubelet[3338]: E0920 22:34:36.427931    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871676427318190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:36 no-preload-032884 kubelet[3338]: E0920 22:34:36.428814    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871676427318190,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:46 no-preload-032884 kubelet[3338]: E0920 22:34:46.258785    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:34:46 no-preload-032884 kubelet[3338]: E0920 22:34:46.430352    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871686430001180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:46 no-preload-032884 kubelet[3338]: E0920 22:34:46.430594    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871686430001180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:56 no-preload-032884 kubelet[3338]: E0920 22:34:56.433016    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871696432037844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:56 no-preload-032884 kubelet[3338]: E0920 22:34:56.433043    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871696432037844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:34:58 no-preload-032884 kubelet[3338]: E0920 22:34:58.257383    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]: E0920 22:35:06.286791    3338 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]: E0920 22:35:06.435384    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871706434704212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:35:06 no-preload-032884 kubelet[3338]: E0920 22:35:06.435493    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871706434704212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:35:09 no-preload-032884 kubelet[3338]: E0920 22:35:09.257598    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:35:16 no-preload-032884 kubelet[3338]: E0920 22:35:16.438264    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871716436343671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:35:16 no-preload-032884 kubelet[3338]: E0920 22:35:16.438309    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871716436343671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:35:21 no-preload-032884 kubelet[3338]: E0920 22:35:21.256887    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	
	
	==> storage-provisioner [9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35] <==
	I0920 22:26:13.132117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:26:13.160833       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:26:13.161029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:26:13.189563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:26:13.189856       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5!
	I0920 22:26:13.192953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a4794ef-8fd8-42cc-b44b-c424ab7bc583", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5 became leader
	I0920 22:26:13.292827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-032884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xvbg9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9: exit status 1 (65.004117ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xvbg9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.37s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
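The connection-refused warnings that follow come from the test helper repeatedly listing pods by label while the cluster's API server at 192.168.61.2:8443 is still down. As a rough illustration only (not the helper's actual implementation), a client-go polling loop equivalent to that check could look like the sketch below; the kubeconfig path and 10-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the real report uses a profile under /home/jenkins/minikube-integration/...
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Mirror the test's wait: up to 9m0s for pods labelled k8s-app=kubernetes-dashboard.
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the API server is unreachable this surfaces as the "connection refused" warning in the log.
			fmt.Println("WARNING: pod list returned:", err)
		} else if len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		time.Sleep(10 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}

Each failed list attempt in the sketch corresponds to one WARNING line in the log below.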
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
(previous WARNING repeated 20 more times)
E0920 22:29:11.809027   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
(previous WARNING repeated 56 more times)
E0920 22:30:09.132361   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
E0920 22:34:11.809139   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
	[identical warning repeated 56 more times]
E0920 22:35:09.132246   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
	[the warning above repeated verbatim on every subsequent poll attempt — 75 more identical lines — while the apiserver at 192.168.61.2:8443 remained unreachable]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (234.143461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-593412" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
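A minimal manual reproduction of what the test was waiting for, assuming the kubeconfig context created for the profile carries the profile name (the status command is copied verbatim from the log above; the kubectl invocation is illustrative and not the test's own code, which polls the API directly):

	# check apiserver state for the profile, as the test does post-failure
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
	# list the dashboard pods the test polls for (assumed context name = profile name)
	kubectl --context old-k8s-version-593412 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver reported as "Stopped", the kubectl call would be expected to fail with the same "connection refused" error against https://192.168.61.2:8443 seen in the poll warnings above.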
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (219.839964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25: (1.642337117s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
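	The fix.go lines above compare the guest clock, read over SSH with `date +%s.%N`, against the host clock and accept the machine when the delta is within tolerance. The following is a minimal, self-contained Go sketch of that check; parseGuestClock and the 1-second threshold are illustrative assumptions, not minikube's actual names or values.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts the output of `date +%s.%N`
	// (e.g. "1726870795.575298210") into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		// Fixed sample taken from the log; since it is in the past, the skew
		// branch below is what prints when this sketch is run today.
		guest, err := parseGuestClock("1726870795.575298210\n")
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold, for illustration only
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		} else {
			fmt.Printf("guest clock skew too large: %v\n", delta)
		}
	}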
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
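At this point the run has enabled storage-provisioner, default-storageclass and metrics-server for the embed-certs-562750 profile. A minimal manual check of the metrics-server registration, assuming the usual minikube kubeconfig context named after the profile and the addon's default deployment name (these commands are not part of the log above, and note that this test run intentionally points metrics-server at the placeholder image fake.domain/registry.k8s.io/echoserver:1.4, so metrics would not actually become available):

    kubectl --context embed-certs-562750 -n kube-system rollout status deployment/metrics-server
    kubectl --context embed-certs-562750 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-562750 top nodes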
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
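The record above shows provision.go generating a server certificate signed by the profile CA with the listed SANs. minikube does this in Go rather than by shelling out, so the following is only a rough openssl equivalent for illustration; the file names follow the paths in the log, and -days 1095 mirrors the profile's CertExpiration of 26280h:

    # Hypothetical openssl sketch of the same cert; not taken from the log.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.default-k8s-diff-port-799783" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 1095 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.39.94,DNS:default-k8s-diff-port-799783,DNS:localhost,DNS:minikube") \
      -out server.pem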
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
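The preceding records rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image registry.k8s.io/pause:3.10, cgroup_manager "cgroupfs", conmon_cgroup "pod", the unprivileged-port sysctl) and point crictl at /var/run/crio/crio.sock before restarting CRI-O. A quick way to confirm the settings took effect on the node, assuming `minikube ssh -p default-k8s-diff-port-799783` as the entry point (these are not commands from the log):

    minikube ssh -p default-k8s-diff-port-799783
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    cat /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version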
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
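	The old-k8s-version-593412 machine has no DHCP lease yet, so the driver keeps polling for an IP with a growing, randomized delay between attempts. A minimal standalone sketch of that retry pattern (the retry.go helper named in the log is minikube-internal; this version only illustrates the idea):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a randomized, growing delay between tries, like the
// "will retry after ..." lines in the log above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Grow the delay each round and add jitter so retries don't sync up.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retryWithBackoff(8, 300*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up") // e.g. no DHCP lease yet
		}
		return nil
	})
	fmt.Println("done:", err)
}
```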
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
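	When crictl reports that the expected images are missing, the preloaded image tarball is copied into the guest, unpacked into /var with lz4, and then deleted. A rough local sketch of the extract-and-clean-up step (the real flow streams the tarball over SSH via ssh_runner and runs tar as root; paths below are taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image tarball into destDir and
// removes the tarball afterwards, mirroring the tar/rm pair in the log.
// Equivalent shell: tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
func extractPreload(tarball, destDir string) error {
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	// Writing into /var requires root; the real code runs this under sudo.
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```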
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
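	The multi-document kubeadm YAML above is rendered from the kubeadm options logged earlier and shipped to /var/tmp/minikube/kubeadm.yaml.new. A toy sketch of that render step, filling a small text/template with the node IP, the non-default API port and the Kubernetes version (the template and field names here are illustrative, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

// A heavily trimmed stand-in for the kubeadm config template; the real one
// covers all three documents shown in the log above.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
`

type opts struct {
	AdvertiseAddress  string
	BindPort          int
	KubernetesVersion string
}

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the log for default-k8s-diff-port-799783.
	cfg := opts{AdvertiseAddress: "192.168.39.94", BindPort: 8444, KubernetesVersion: "v1.31.1"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```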
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
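	The openssl/ln pairs above place each CA certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients look up trust anchors. A small sketch of that hash-and-symlink step, shelling out to openssl exactly as the log does (paths are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// certsDir/<hash>.0 pointing at it, like the `openssl x509 -hash -noout` +
// `ln -fs` pair in the log.
func linkBySubjectHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace any stale link (the -f in ln -fs)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}
```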
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
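	Each `-checkend 86400` call asks OpenSSL whether the certificate expires within the next 24 hours, so certificates close to expiry get regenerated before the cluster restart. The same check in pure Go with crypto/x509 (a sketch, reusing one of the certificate paths from the log):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```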
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
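	The health wait above cycles through connection refused, then 403 for the still-anonymous probe, then 500 while post-start hooks (rbac/bootstrap-roles, system priority classes) finish, and finally 200/ok. A bare-bones version of that polling loop (this sketch skips TLS verification for brevity; the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
// mirroring the repeated "Checking apiserver healthz" lines in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip certificate verification
		// instead of loading the cluster CA as the real code does.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.94:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```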
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
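	Configuring the bridge CNI comes down to dropping a conflist into /etc/cni/net.d. The sketch below writes a representative bridge + portmap conflist over the 10.244.0.0/16 pod CIDR from the kubeadm config above; it is not guaranteed to be byte-for-byte what minikube puts in 1-k8s.conflist, only the standard shape of such a file:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Representative bridge CNI configuration: a bridge plugin with
	// host-local IPAM over the pod CIDR, plus portmap for hostPort support.
	conflist := map[string]any{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []any{
			map[string]any{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			map[string]any{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Writing here requires root, hence the `sudo mkdir -p` in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", append(data, '\n'), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```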
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
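	Each pod_ready.go line above checks the pod's Ready condition and skips pods whose node is not yet Ready. A stripped-down client-go version of that single condition check (the kubeconfig path and pod name are placeholders taken from this run):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod currently has condition Ready=True.
func podReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; minikube points this at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podReady(client, "kube-system", "kube-proxy-mqkfs")
	fmt.Println("ready:", ready, "err:", err)
}
```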
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
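The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it against the host clock, and accept the 78ms delta as within tolerance. A small Go sketch of that check (the 2s tolerance is an assumption for illustration, not minikube's configured value):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance compares the guest clock against the host clock and
	// reports whether the absolute delta is small enough to skip a resync.
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		return delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(78 * time.Millisecond) // delta comparable to the log above
		if clockWithinTolerance(guest, host, 2*time.Second) {
			fmt.Println("guest clock delta is within tolerance")
		}
	}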
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
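The sequence above configures CRI-O over SSH: set the pause image, switch the cgroup manager to cgroupfs, load br_netfilter when the sysctl is missing, enable IP forwarding, then restart crio. A Go sketch replaying those steps as plain shell invocations (paths and the pause tag follow the log; running this for real requires root on a CRI-O host, so treat it as a sketch only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configureCRIO runs, in order, the same shell steps the log performs
	// remotely via ssh_runner.
	func configureCRIO() error {
		steps := []string{
			`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
			`sysctl net.bridge.bridge-nf-call-iptables || modprobe br_netfilter`,
			`echo 1 > /proc/sys/net/ipv4/ip_forward`,
			`systemctl daemon-reload && systemctl restart crio`,
		}
		for _, s := range steps {
			if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
				return fmt.Errorf("%q failed: %v\n%s", s, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := configureCRIO(); err != nil {
			fmt.Println(err)
		}
	}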
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
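The preload flow above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is absent, extracts it into /var with lz4-compressed tar, and removes the tarball afterwards. A Go sketch of that flow (a local cp stands in for the scp-over-SSH step; this is illustrative, not minikube's preload code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensurePreload copies and unpacks the preload tarball only if it is not
	// already present at /preloaded.tar.lz4, mirroring the log above.
	func ensurePreload(cachedTarball string) error {
		const target = "/preloaded.tar.lz4"
		if _, err := os.Stat(target); err == nil {
			return nil // already present, nothing to copy
		}
		if out, err := exec.Command("cp", cachedTarball, target).CombinedOutput(); err != nil {
			return fmt.Errorf("copy failed: %v\n%s", err, out)
		}
		// --xattrs keeps security.capability bits on the unpacked image layers.
		extract := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", target)
		if out, err := extract.CombinedOutput(); err != nil {
			return fmt.Errorf("extract failed: %v\n%s", err, out)
		}
		return os.Remove(target)
	}

	func main() {
		if err := ensurePreload("preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}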
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
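The kubeadm, kubelet and kube-proxy YAML above is rendered by minikube from Go templates and then copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As an illustration only, the sketch below renders a ClusterConfiguration stanza with text/template; the template string and the clusterParams struct are hypothetical stand-ins, not minikube's actual bootstrapper types.

package main

import (
	"os"
	"text/template"
)

// clusterTmpl is a hypothetical, trimmed-down template covering a few of the
// ClusterConfiguration fields visible in the generated YAML above.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: {{.DNSDomain}}
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

// clusterParams mirrors the values printed in the "kubeadm options" log line.
type clusterParams struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	DNSDomain           string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	p := clusterParams{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.20.0",
		DNSDomain:           "cluster.local",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	t := template.Must(template.New("cluster").Parse(clusterTmpl))
	// minikube writes the rendered YAML to /var/tmp/minikube/kubeadm.yaml.new;
	// this sketch just prints it to stdout.
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}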
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
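The grep/cp one-liner a few lines above keeps exactly one control-plane.minikube.internal entry in /etc/hosts before the kubelet is restarted. Below is a minimal Go sketch of the same idempotent pattern; the helper and the local file name are hypothetical, not minikube's code.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "<TAB>host" and appends
// "ip<TAB>host", mirroring the bash one-liner in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop stale entries for this host and empty split elements.
		if line == "" || strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Hypothetical local file; the real target is /etc/hosts on the guest.
	if err := ensureHostsEntry("hosts.test", "192.168.61.2", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}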
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
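The openssl x509 -checkend 86400 runs above test whether each control-plane certificate remains valid for at least another 24 hours. The same condition can be checked with crypto/x509; the sketch below is illustrative and uses a hypothetical certificate path rather than the /var/lib/minikube/certs files from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the condition that "openssl x509 -checkend <seconds>" tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path for the sketch.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}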
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
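The pgrep invocations that follow are minikube polling roughly every 500ms until a kube-apiserver process matching the pattern appears. Below is a stand-alone sketch of such a wait loop; the interval and timeout values are assumptions for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the full-command-line pattern matches a
// running process or the timeout elapses.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches; -xnf mirrors the log.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for a process matching %q", pattern)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}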
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
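The server certificate above is issued for the listed SANs (127.0.0.1, 192.168.72.19, localhost, minikube, no-preload-032884) and signed with the shared CA key pair. The sketch below shows one way to issue such a certificate with crypto/x509; the throwaway CA generated in main is a stand-in for minikube's existing ca.pem/ca-key.pem, and error handling is abbreviated.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate for the SANs seen in the log,
// using an existing CA certificate and key.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-032884"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-032884"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.19")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	// Throwaway CA for the sketch; minikube reuses ca.pem / ca-key.pem instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	certPEM, err := issueServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("server.pem", certPEM, 0600)
}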
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
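The guest clock lines above compare the VM's date +%s.%N output with the host time and accept the drift when the delta stays within a tolerance (69.358239ms here). Below is a small worked sketch of that comparison using the values from the log; the 2s tolerance is an assumption for the example.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether guest and host clocks differ by less than
// tol in either direction.
func withinTolerance(guest, host time.Time, tol time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta < tol
}

func main() {
	// Values from the log: guest 22:20:57.462716254, host 22:20:57.393358015 UTC.
	guest := time.Date(2024, 9, 20, 22, 20, 57, 462716254, time.UTC)
	host := time.Date(2024, 9, 20, 22, 20, 57, 393358015, time.UTC)
	// Prints a delta of 69.358239ms, matching the log line above.
	fmt.Println("delta:", guest.Sub(host), "ok:", withinTolerance(guest, host, 2*time.Second))
}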
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
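
The step above hunts for bridge/podman CNI configs under /etc/cni/net.d and renames them with a .mk_disabled suffix so they stop shadowing the config minikube manages. A rough Go equivalent of that rename pass, under the assumption that a simple substring match on the file name is good enough:

// Illustrative sketch of the CNI-config disabling step (not minikube's source).
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	moved, err := disableConflictingCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}
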
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
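
The sequence above is a fallback: if the net.bridge.bridge-nf-call-iptables sysctl cannot be read, the br_netfilter module is loaded and IPv4 forwarding is enabled. A small Go sketch of the same fallback run locally (an assumption for illustration; the real code executes these commands over SSH via ssh_runner):

// Sketch: probe the bridge-netfilter sysctl, load the module if it is missing,
// then enable IPv4 forwarding just as the log does.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
		return nil // key readable, module already loaded
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter"); err != nil && err.Run() != nil {
		return fmt.Errorf("modprobe br_netfilter failed")
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println("netfilter setup failed:", err)
	}
}
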
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
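
Each "needs transfer" decision above boils down to asking the container runtime for the image ID and comparing it with the ID expected for the cached tarball; a missing image or a mismatched hash marks the image for removal and reload. An illustrative Go sketch of that check (an assumption about the logic, not minikube's exact cache_images code):

// needsTransfer: the image must be (re)loaded if it is absent or its ID differs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not present in the container runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/etcd:3.5.15-0",
		"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"))
}
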
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
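
The "copy: skipping ... (exists)" lines come from comparing a stat -c "%s %y" probe on the VM against the local cached file before deciding whether to re-copy it. A simplified sketch that only compares sizes (an assumption; the real check also looks at timestamps):

// needsCopy reports whether a cached image tarball must be re-copied to the VM,
// given the "<size> <mtime>" output of stat on the remote path.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func needsCopy(localPath, remoteStatOutput string) (bool, error) {
	fi, err := os.Stat(localPath)
	if err != nil {
		return false, err
	}
	fields := strings.Fields(remoteStatOutput)
	if len(fields) == 0 {
		return true, nil // no remote file yet: copy it
	}
	remoteSize, err := strconv.ParseInt(fields[0], 10, 64)
	if err != nil {
		return true, nil // unparseable stat output: copy to be safe
	}
	return remoteSize != fi.Size(), nil
}

func main() {
	need, err := needsCopy("/var/lib/minikube/cache/images/kube-apiserver_v1.31.1",
		"98312345 2024-09-20 22:20:59")
	fmt.Println("copy needed:", need, "err:", err)
}
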
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
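
Loading the cache is sequential: each tarball under /var/lib/minikube/images is fed to podman load -i and timed, which is why the log reports a per-image duration and a ~15s total. A minimal sketch of that loop (paths and the helper name are illustrative):

// loadCachedImages pushes each cached tarball through `podman load` in turn.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func loadCachedImages(tarballs []string) error {
	for _, tb := range tarballs {
		start := time.Now()
		if err := exec.Command("sudo", "podman", "load", "-i", tb).Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tb, err)
		}
		fmt.Printf("loaded %s in %s\n", tb, time.Since(start))
	}
	return nil
}

func main() {
	imgs := []string{
		"/var/lib/minikube/images/kube-apiserver_v1.31.1",
		"/var/lib/minikube/images/etcd_3.5.15-0",
	}
	if err := loadCachedImages(imgs); err != nil {
		fmt.Println(err)
	}
}
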
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
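
The /etc/hosts one-liner above drops any stale line ending in the hostname and appends a fresh "IP<TAB>name" mapping. A local-file Go sketch of the same idea (the path in main is a placeholder, not a real system file):

// ensureHostsEntry: keep exactly one "IP<TAB>hostname" line for the given name.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue // drop stale entries and blank lines
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	err := ensureHostsEntry("/tmp/hosts.example", "192.168.72.19", "control-plane.minikube.internal")
	fmt.Println(err)
}
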
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
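
Each openssl x509 -checkend 86400 call above asks whether the certificate will still be valid 24 hours from now. The same check expressed with Go's x509 parser (the path in main is just one of the certs listed above; the helper is an illustration, not minikube's code):

// expiresWithin reports whether the certificate's NotAfter falls inside the window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(certPath string, window time.Duration) (bool, error) {
	raw, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}
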
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
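	The block above is one probe cycle: pgrep looks for a running kube-apiserver process, then each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) is queried by name via crictl, and every query returns an empty id list. A minimal shell sketch of the same per-component check, run on the node (assumes crictl is installed and sudo is available):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(sudo crictl ps -a --quiet --name="$c")
	      [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
	    done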
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
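	The "connection refused" on localhost:8443 is consistent with the empty kube-apiserver listing above: the describe-nodes gathering cannot succeed while nothing is serving the API. A quick manual check for the same condition (a sketch, run on the node; the pgrep pattern is the one the harness uses):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	    curl -sk https://localhost:8443/healthz || echo "nothing listening on 8443"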
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
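	The interleaved pod_ready lines appear to come from other test processes running in parallel (pids 64543, 64076, 64930), each polling its metrics-server pod in kube-system until the Ready condition turns True. A rough kubectl equivalent of one such probe (a sketch; "$PROFILE" is a placeholder for the relevant minikube profile, and the pod name is taken from the log above):

	    kubectl --context "$PROFILE" -n kube-system get pod metrics-server-6867b74b74-4j9zb \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'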
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
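	Each retry cycle gathers the same log sources: kubelet and CRI-O via journalctl, dmesg, describe nodes, and container status. Outside the harness, roughly the same data can be collected with (a sketch; assumes the profile's node is reachable via minikube ssh):

	    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
	    minikube -p "$PROFILE" logs --file=minikube-logs.txt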
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
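	(Note: each retry of the loop above repeats the same node-level diagnostics after the apiserver probe, the pgrep for kube-apiserver, comes back empty. Consolidated as a rough sketch, with every command copied from the log lines above and the kubectl binary path specific to this v1.20.0 profile, the sequence on the guest is roughly:)

	# 1. Look for control-plane containers via CRI.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet kubernetes-dashboard; do
	  sudo crictl ps -a --quiet --name="${name}"
	done
	# 2. Gather node-level logs when no containers are found.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	# 3. Ask the apiserver to describe nodes; this is the step that keeps failing
	#    with "The connection to the server localhost:8443 was refused".
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig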
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
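	(Note: unlike the pid-65164 loop, this profile does find its control-plane containers, so the gathering step switches from journalctl to per-container CRI logs. Sketched with the commands from the entries above, and simplified by assuming a single matching container per component, the per-component step is:)

	# Resolve the container ID for a component (etcd, as an example from above),
	# then tail the last 400 lines of its logs through crictl.
	ID="$(sudo crictl ps -a --quiet --name=etcd)"
	sudo /usr/bin/crictl logs --tail 400 "${ID}"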
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
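The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when the check fails (here every grep fails simply because the files are gone after the reset). A minimal standalone sketch of that pattern, using the endpoint from this run; this is an illustration, not minikube's own code:

    # Hypothetical sketch of the cleanup loop logged above (minikube issues each command over SSH).
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # grep exits non-zero when the endpoint is missing or the file does not exist ...
        if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
            sudo rm -f "/etc/kubernetes/$f"    # ... and the stale (or absent) file is cleared before kubeadm init
        fi
    done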
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
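The 496-byte 1-k8s.conflist written here is copied from memory, so its contents never appear in the log. For illustration only, a minimal bridge CNI configuration of the kind the "Configuring bridge CNI" step refers to could be written like this; every field value below is an assumption, not the file minikube actually generated:

    # Hypothetical example of a bridge conflist; the real payload is not shown in the log.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF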
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
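The repeated "kubectl get sa default" calls above are minikube polling until the default service account exists, right after it creates the minikube-rbac cluster-admin binding; the ~4.4s recorded here is that polling loop. Equivalent one-off checks by hand would be:

    # Assumes kubectl already points at this profile's kubeconfig.
    kubectl -n default get serviceaccount default
    kubectl get clusterrolebinding minikube-rbac -o wide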
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
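With the three addons applied, the same state can be confirmed by hand; the metrics-server pod is expected to stay unready in this test because the addon image was pointed at fake.domain (see the "Using image fake.domain/registry.k8s.io/echoserver:1.4" line above):

    # Assumes kubectl points at the default-k8s-diff-port-799783 kubeconfig.
    kubectl get storageclass                         # default-storageclass
    kubectl -n kube-system get deploy,pod | grep -E 'metrics-server|storage-provisioner'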
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
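The healthz wait above is a plain HTTPS GET against the apiserver on the profile's non-default port 8444; the same probe by hand looks roughly like this (-k skips certificate verification purely for a quick manual check):

    curl -k https://192.168.39.94:8444/healthz    # expected body: ok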
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
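For the v1.20.0 cluster (process 65164) the failing probe is quoted verbatim by kubeadm; reproducing it and inspecting the kubelet on that node uses the same kinds of commands the log already runs elsewhere:

    curl -sSL http://localhost:10248/healthz       # the exact check kubeadm reports as failing
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 50 --no-pager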
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
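The kubeadm.go:163 lines above show the stale-config cleanup that runs before `kubeadm init` is retried: each leftover kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint and removed if the endpoint is absent (or the file is missing). The sketch below mirrors that logic locally, without minikube's SSH runner; the paths and endpoint string are taken from the log, everything else is illustrative.

// Illustrative sketch: remove kubeconfig files that do not reference the expected control plane.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cleanStaleKubeconfigs(dir, endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: delete it so kubeadm can rewrite it.
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}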
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
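The repeated `kubectl get sa default` runs above are the wait loop behind the elevateKubeSystemPrivileges step: minikube keeps querying for the "default" service account until it exists. A minimal sketch of that retry loop, shelling out to kubectl the same way; the half-second interval roughly matches the spacing seen in the log, but the exact timing and flags are assumptions, not minikube's implementation.

// Illustrative sketch: wait until the "default" service account exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}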
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
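The node_ready.go lines above wait for the node's Ready condition to report "True". A minimal client-go equivalent of that check is sketched below; it uses the standard k8s.io/client-go packages and the kubeconfig path and node name seen in the log, but it is an illustration, not minikube's own code.

// Illustrative sketch: read a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)
	ready, err := nodeIsReady(clientset, "no-preload-032884")
	fmt.Println(ready, err)
}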
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.102582244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871873102551925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e56ca159-e95c-4b57-a56a-0edbdff23b40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.103247585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e0978c6-345b-46b8-9523-731265e5c677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.103330936Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e0978c6-345b-46b8-9523-731265e5c677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.103467401Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=3e0978c6-345b-46b8-9523-731265e5c677 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.139471411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=92362252-5d14-422f-9749-a76b22dd99cf name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.139565851Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=92362252-5d14-422f-9749-a76b22dd99cf name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.140497370Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=da5cce5c-e872-4096-904d-25d21cf7a9ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.140966348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871873140935766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=da5cce5c-e872-4096-904d-25d21cf7a9ef name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.141554140Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f365b29b-31a7-452a-b47f-dae9ab6107e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.141609111Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f365b29b-31a7-452a-b47f-dae9ab6107e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.141642744Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f365b29b-31a7-452a-b47f-dae9ab6107e0 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.174692580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=926af998-8f8a-4883-a612-5cc990011b57 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.174786129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=926af998-8f8a-4883-a612-5cc990011b57 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.176133236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f10a393a-4bc8-4d03-9733-f32c9cc57525 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.176592018Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871873176566893,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f10a393a-4bc8-4d03-9733-f32c9cc57525 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.177156102Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0bdf143f-4799-434d-960d-f6abdce4c75c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.177210653Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0bdf143f-4799-434d-960d-f6abdce4c75c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.177245247Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0bdf143f-4799-434d-960d-f6abdce4c75c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.210716642Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c47bb777-57c3-4192-b670-40d0edc93c29 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.210797118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c47bb777-57c3-4192-b670-40d0edc93c29 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.211980647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df9a8a06-41a3-4515-ae15-1f89436c65ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.212457662Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871873212352802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df9a8a06-41a3-4515-ae15-1f89436c65ff name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.213094281Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1a372e28-c74e-4406-8719-c1679d5a136f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.213146426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1a372e28-c74e-4406-8719-c1679d5a136f name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:37:53 old-k8s-version-593412 crio[635]: time="2024-09-20 22:37:53.213183870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1a372e28-c74e-4406-8719-c1679d5a136f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 22:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053738] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.115225] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.678902] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.608738] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.088274] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.062764] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064033] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.206014] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.138069] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.284835] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.732414] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.070990] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.334399] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.408297] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 22:24] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Sep20 22:26] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.072028] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:37:53 up 17 min,  0 users,  load average: 0.01, 0.04, 0.05
	Linux old-k8s-version-593412 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000ccd260, 0x0, 0x0)
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008c41c0)
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: goroutine 135 [select]:
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000c9e9b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000ccd4a0, 0x0, 0x0)
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000674700)
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 20 22:37:48 old-k8s-version-593412 kubelet[6528]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 20 22:37:48 old-k8s-version-593412 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 22:37:48 old-k8s-version-593412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 22:37:48 old-k8s-version-593412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Sep 20 22:37:48 old-k8s-version-593412 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 22:37:48 old-k8s-version-593412 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 22:37:49 old-k8s-version-593412 kubelet[6537]: I0920 22:37:49.078900    6537 server.go:416] Version: v1.20.0
	Sep 20 22:37:49 old-k8s-version-593412 kubelet[6537]: I0920 22:37:49.079307    6537 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 22:37:49 old-k8s-version-593412 kubelet[6537]: I0920 22:37:49.082452    6537 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 22:37:49 old-k8s-version-593412 kubelet[6537]: W0920 22:37:49.083746    6537 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 20 22:37:49 old-k8s-version-593412 kubelet[6537]: I0920 22:37:49.084052    6537 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
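Note on the captured logs above: the kubelet section shows the service crash-looping ("Main process exited, code=exited, status=255", restart counter at 114) together with the warning "Cannot detect current cgroup on cgroup v2". A minimal diagnostic sketch for confirming that combination on the node; the kubelet config path comes from the kubelet-start output above, while the remaining commands are generic checks that this run did not execute and are included only as an assumption of how one might investigate:

	# open a shell on the affected guest (profile name taken from this test)
	minikube ssh -p old-k8s-version-593412

	# "cgroup2fs" here indicates the guest is running cgroup v2
	stat -fc %T /sys/fs/cgroup

	# cgroup driver the kubelet was configured with (file written during kubeadm init)
	grep -i cgroupDriver /var/lib/kubelet/config.yaml

	# crash details for the most recent kubelet restart (command quoted in the kubeadm output above)
	journalctl -xeu kubelet | tail -n 50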
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (264.527336ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-593412" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.52s)
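Note: the exit reason recorded above is K8S_KUBELET_NOT_RUNNING, and minikube's own suggestion in the log is to check 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A hedged sketch of that retry for this profile; the extra-config flag is quoted from the log, while the other flags mirror those used for the other profiles in this run and may need adjusting:

	minikube start -p old-k8s-version-593412 \
	  --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

	# afterwards, inside 'minikube ssh -p old-k8s-version-593412', confirm the kubelet stays up
	sudo systemctl status kubelet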

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (461.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-562750 -n embed-certs-562750
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:41:21.413836104 +0000 UTC m=+6821.544971078
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-562750 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-562750 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.246µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-562750 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
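Note: the assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference " registry.k8s.io/echoserver:1.4", but the describe call hit the context deadline, so the deployment info above is empty. A hedged sketch for re-checking the deployed image once the cluster is reachable; the context and deployment names are taken from the commands above, and the jsonpath query is a generic kubectl invocation rather than anything the test ran:

	kubectl --context embed-certs-562750 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'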
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-562750 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-562750 logs -n 25: (1.284429193s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC | 20 Sep 24 22:40 UTC |
	| start   | -p newest-cni-402676 --memory=2200 --alsologtostderr   | newest-cni-402676            | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC | 20 Sep 24 22:40 UTC |
	| start   | -p auto-833441 --memory=3072                           | auto-833441                  | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:40:59
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:40:59.967889   72245 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:40:59.968010   72245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:40:59.968019   72245 out.go:358] Setting ErrFile to fd 2...
	I0920 22:40:59.968023   72245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:40:59.968183   72245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:40:59.968732   72245 out.go:352] Setting JSON to false
	I0920 22:40:59.969589   72245 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8608,"bootTime":1726863452,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:40:59.969691   72245 start.go:139] virtualization: kvm guest
	I0920 22:40:59.972089   72245 out.go:177] * [auto-833441] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:40:59.973361   72245 notify.go:220] Checking for updates...
	I0920 22:40:59.973368   72245 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:40:59.974744   72245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:40:59.976024   72245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:40:59.977122   72245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:40:59.978241   72245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:40:59.979822   72245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:40:59.981572   72245 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:59.981663   72245 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:59.981749   72245 config.go:182] Loaded profile config "newest-cni-402676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:59.981827   72245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:41:00.023077   72245 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:41:00.024164   72245 start.go:297] selected driver: kvm2
	I0920 22:41:00.024183   72245 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:41:00.024199   72245 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:41:00.025060   72245 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:41:00.025169   72245 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:41:00.041050   72245 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:41:00.041102   72245 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 22:41:00.041385   72245 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:41:00.041415   72245 cni.go:84] Creating CNI manager for ""
	I0920 22:41:00.041468   72245 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:41:00.041482   72245 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:41:00.041539   72245 start.go:340] cluster config:
	{Name:auto-833441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:41:00.041692   72245 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:41:00.043996   72245 out.go:177] * Starting "auto-833441" primary control-plane node in "auto-833441" cluster
	I0920 22:40:59.647956   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.648496   71805 main.go:141] libmachine: (newest-cni-402676) Found IP for machine: 192.168.61.18
	I0920 22:40:59.648519   71805 main.go:141] libmachine: (newest-cni-402676) Reserving static IP address...
	I0920 22:40:59.648532   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has current primary IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.648951   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find host DHCP lease matching {name: "newest-cni-402676", mac: "52:54:00:45:ed:7a", ip: "192.168.61.18"} in network mk-newest-cni-402676
	I0920 22:40:59.730990   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Getting to WaitForSSH function...
	I0920 22:40:59.731021   71805 main.go:141] libmachine: (newest-cni-402676) Reserved static IP address: 192.168.61.18
	I0920 22:40:59.731047   71805 main.go:141] libmachine: (newest-cni-402676) Waiting for SSH to be available...
	I0920 22:40:59.733467   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.734007   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:ed:7a}
	I0920 22:40:59.734030   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.734133   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Using SSH client type: external
	I0920 22:40:59.734169   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa (-rw-------)
	I0920 22:40:59.734197   71805 main.go:141] libmachine: (newest-cni-402676) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.18 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:40:59.734209   71805 main.go:141] libmachine: (newest-cni-402676) DBG | About to run SSH command:
	I0920 22:40:59.734222   71805 main.go:141] libmachine: (newest-cni-402676) DBG | exit 0
	I0920 22:40:59.863330   71805 main.go:141] libmachine: (newest-cni-402676) DBG | SSH cmd err, output: <nil>: 
	I0920 22:40:59.863594   71805 main.go:141] libmachine: (newest-cni-402676) KVM machine creation complete!
	I0920 22:40:59.863948   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetConfigRaw
	I0920 22:40:59.864562   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:40:59.864730   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:40:59.864889   71805 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0920 22:40:59.864903   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetState
	I0920 22:40:59.866210   71805 main.go:141] libmachine: Detecting operating system of created instance...
	I0920 22:40:59.866222   71805 main.go:141] libmachine: Waiting for SSH to be available...
	I0920 22:40:59.866227   71805 main.go:141] libmachine: Getting to WaitForSSH function...
	I0920 22:40:59.866234   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:40:59.868896   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.869265   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:40:59.869296   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.869417   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:40:59.869600   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:40:59.869779   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:40:59.869898   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:40:59.870066   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:40:59.870315   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:40:59.870329   71805 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0920 22:40:59.983310   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:40:59.983329   71805 main.go:141] libmachine: Detecting the provisioner...
	I0920 22:40:59.983337   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:40:59.986609   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.986946   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:40:59.987008   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:59.987142   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:40:59.987374   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:40:59.987570   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:40:59.987748   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:40:59.987912   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:40:59.988121   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:40:59.988136   71805 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0920 22:41:00.096171   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0920 22:41:00.096273   71805 main.go:141] libmachine: found compatible host: buildroot
	I0920 22:41:00.096284   71805 main.go:141] libmachine: Provisioning with buildroot...
	I0920 22:41:00.096292   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetMachineName
	I0920 22:41:00.096528   71805 buildroot.go:166] provisioning hostname "newest-cni-402676"
	I0920 22:41:00.096558   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetMachineName
	I0920 22:41:00.096752   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:00.099602   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.100017   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.100039   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.100277   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:00.100461   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.100599   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.100757   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:00.100949   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:41:00.101172   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:41:00.101185   71805 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-402676 && echo "newest-cni-402676" | sudo tee /etc/hostname
	I0920 22:41:00.222604   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-402676
	
	I0920 22:41:00.222632   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:00.225504   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.225831   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.225867   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.226070   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:00.226244   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.226379   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.226567   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:00.226723   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:41:00.226886   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:41:00.226901   71805 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-402676' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-402676/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-402676' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:41:00.340734   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:41:00.340766   71805 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:41:00.340785   71805 buildroot.go:174] setting up certificates
	I0920 22:41:00.340795   71805 provision.go:84] configureAuth start
	I0920 22:41:00.340803   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetMachineName
	I0920 22:41:00.341057   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetIP
	I0920 22:41:00.344024   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.344431   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.344467   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.344661   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:00.346878   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.347343   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.347386   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.347539   71805 provision.go:143] copyHostCerts
	I0920 22:41:00.347596   71805 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:41:00.347616   71805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:41:00.347696   71805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:41:00.347844   71805 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:41:00.347862   71805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:41:00.347902   71805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:41:00.347974   71805 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:41:00.347983   71805 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:41:00.348021   71805 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:41:00.348084   71805 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.newest-cni-402676 san=[127.0.0.1 192.168.61.18 localhost minikube newest-cni-402676]
	I0920 22:41:00.703022   71805 provision.go:177] copyRemoteCerts
	I0920 22:41:00.703098   71805 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:41:00.703120   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:00.705696   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.705981   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.706011   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.706231   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:00.706416   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.706570   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:00.706704   71805 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa Username:docker}
	I0920 22:41:00.790133   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:41:00.815521   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:41:00.840752   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:41:00.865446   71805 provision.go:87] duration metric: took 524.637804ms to configureAuth
	I0920 22:41:00.865475   71805 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:41:00.865644   71805 config.go:182] Loaded profile config "newest-cni-402676": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:41:00.865713   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:00.868306   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.868623   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:00.868651   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:00.868812   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:00.868981   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.869120   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:00.869220   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:00.869371   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:41:00.869566   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:41:00.869587   71805 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:41:00.045241   72245 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:41:00.045278   72245 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 22:41:00.045301   72245 cache.go:56] Caching tarball of preloaded images
	I0920 22:41:00.045425   72245 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:41:00.045438   72245 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 22:41:00.045555   72245 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/auto-833441/config.json ...
	I0920 22:41:00.045581   72245 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/auto-833441/config.json: {Name:mke2637ccd0d7fd74faa8a07858edf0e59a34511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:00.045745   72245 start.go:360] acquireMachinesLock for auto-833441: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:41:01.368303   72245 start.go:364] duration metric: took 1.322521987s to acquireMachinesLock for "auto-833441"
	I0920 22:41:01.368364   72245 start.go:93] Provisioning new machine with config: &{Name:auto-833441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-833441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:41:01.368475   72245 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 22:41:01.124923   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:41:01.124949   71805 main.go:141] libmachine: Checking connection to Docker...
	I0920 22:41:01.124959   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetURL
	I0920 22:41:01.126342   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Using libvirt version 6000000
	I0920 22:41:01.128644   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.129002   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.129034   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.129197   71805 main.go:141] libmachine: Docker is up and running!
	I0920 22:41:01.129214   71805 main.go:141] libmachine: Reticulating splines...
	I0920 22:41:01.129222   71805 client.go:171] duration metric: took 25.017513928s to LocalClient.Create
	I0920 22:41:01.129250   71805 start.go:167] duration metric: took 25.017589823s to libmachine.API.Create "newest-cni-402676"
	I0920 22:41:01.129263   71805 start.go:293] postStartSetup for "newest-cni-402676" (driver="kvm2")
	I0920 22:41:01.129276   71805 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:41:01.129295   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:41:01.129550   71805 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:41:01.129579   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:01.132127   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.132635   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.132665   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.132841   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:01.133034   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:01.133176   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:01.133366   71805 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa Username:docker}
	I0920 22:41:01.217633   71805 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:41:01.221835   71805 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:41:01.221860   71805 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:41:01.221926   71805 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:41:01.222028   71805 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:41:01.222117   71805 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:41:01.231721   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:41:01.256362   71805 start.go:296] duration metric: took 127.086842ms for postStartSetup
	I0920 22:41:01.256407   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetConfigRaw
	I0920 22:41:01.257066   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetIP
	I0920 22:41:01.259739   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.260100   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.260123   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.260371   71805 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/config.json ...
	I0920 22:41:01.260545   71805 start.go:128] duration metric: took 25.16766985s to createHost
	I0920 22:41:01.260568   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:01.262592   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.262985   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.263015   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.263133   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:01.263297   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:01.263419   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:01.263553   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:01.263733   71805 main.go:141] libmachine: Using SSH client type: native
	I0920 22:41:01.263908   71805 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.18 22 <nil> <nil>}
	I0920 22:41:01.263918   71805 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:41:01.368130   71805 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726872061.322768102
	
	I0920 22:41:01.368157   71805 fix.go:216] guest clock: 1726872061.322768102
	I0920 22:41:01.368167   71805 fix.go:229] Guest: 2024-09-20 22:41:01.322768102 +0000 UTC Remote: 2024-09-20 22:41:01.260556267 +0000 UTC m=+25.284951646 (delta=62.211835ms)
	I0920 22:41:01.368188   71805 fix.go:200] guest clock delta is within tolerance: 62.211835ms
	I0920 22:41:01.368200   71805 start.go:83] releasing machines lock for "newest-cni-402676", held for 25.275428017s
	I0920 22:41:01.368222   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:41:01.368516   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetIP
	I0920 22:41:01.371370   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.371746   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.371774   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.372001   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:41:01.372970   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:41:01.373162   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:41:01.373245   71805 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:41:01.373295   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:01.373354   71805 ssh_runner.go:195] Run: cat /version.json
	I0920 22:41:01.373383   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHHostname
	I0920 22:41:01.376407   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.376659   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.376822   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.376856   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.377010   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:01.377101   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:01.377116   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:01.377152   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:01.377275   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHPort
	I0920 22:41:01.377314   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:01.377411   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHKeyPath
	I0920 22:41:01.377475   71805 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa Username:docker}
	I0920 22:41:01.377525   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetSSHUsername
	I0920 22:41:01.377646   71805 sshutil.go:53] new ssh client: &{IP:192.168.61.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa Username:docker}
	I0920 22:41:01.461060   71805 ssh_runner.go:195] Run: systemctl --version
	I0920 22:41:01.484046   71805 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:41:01.652582   71805 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:41:01.659642   71805 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:41:01.659698   71805 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:41:01.676974   71805 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
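
Here the competing bridge/podman CNI configs under /etc/cni/net.d are neutralized by renaming them with a .mk_disabled suffix rather than deleting them. A rough Go equivalent of that rename-to-disable pattern (directory and suffix taken from the log; the helper itself is an illustrative sketch):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files so the container
// runtime no longer loads them, mirroring the "mv X X.mk_disabled" step above.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", files)
}
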
	I0920 22:41:01.676996   71805 start.go:495] detecting cgroup driver to use...
	I0920 22:41:01.677064   71805 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:41:01.695156   71805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:41:01.710580   71805 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:41:01.710651   71805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:41:01.726035   71805 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:41:01.741322   71805 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:41:01.874491   71805 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:41:02.055163   71805 docker.go:233] disabling docker service ...
	I0920 22:41:02.055239   71805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:41:02.071152   71805 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:41:02.087789   71805 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:41:02.214479   71805 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:41:02.345884   71805 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
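
In this stretch the cri-docker and docker units are stopped, disabled and masked so that only CRI-O owns the container runtime on the node. A Go sketch of issuing those systemctl operations in sequence (unit names come from the log; the loop itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Stop, disable and mask the competing runtime units, as in the log above.
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		args := append([]string{"-n"}, s...) // sudo -n: fail instead of prompting for a password
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", s, err, out)
		}
	}
}
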
	I0920 22:41:02.361679   71805 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:41:02.383250   71805 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:41:02.383311   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.394855   71805 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:41:02.394934   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.405538   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.416606   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.428356   71805 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:41:02.440435   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.451518   71805 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:41:02.469664   71805 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
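
The sed commands above force the pause image, cgroup manager, conmon cgroup and default sysctls in /etc/crio/crio.conf.d/02-crio.conf. A small Go sketch of the same line-based substitution applied locally (the patterns mirror two of the sed expressions; the helper name is an illustrative assumption):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// replaceLine applies a single sed-style substitution to the file in place,
// e.g. forcing pause_image or cgroup_manager as in the steps above.
func replaceLine(path, pattern, replacement string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(pattern)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(replacement)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	edits := [][2]string{
		{`(?m)^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10"`},
		{`(?m)^.*cgroup_manager = .*$`, `cgroup_manager = "cgroupfs"`},
	}
	for _, e := range edits {
		if err := replaceLine(conf, e[0], e[1]); err != nil {
			fmt.Println("edit failed:", err)
		}
	}
}
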
	I0920 22:41:02.480572   71805 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:41:02.492388   71805 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:41:02.492448   71805 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:41:02.508861   71805 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
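
Above, the netfilter sysctl check fails because /proc/sys/net/bridge does not exist yet, so br_netfilter is loaded and IPv4 forwarding is switched on by writing 1 into /proc. A Go sketch of the same check-then-fix flow (commands and paths from the log; the error handling is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl is missing, load br_netfilter first,
	// then enable IPv4 forwarding, as in the log above.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v\n%s", err, out)
			return
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward failed (needs root):", err)
	}
}
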
	I0920 22:41:02.519876   71805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:41:02.643367   71805 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:41:02.751428   71805 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:41:02.751509   71805 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:41:02.758441   71805 start.go:563] Will wait 60s for crictl version
	I0920 22:41:02.758511   71805 ssh_runner.go:195] Run: which crictl
	I0920 22:41:02.763706   71805 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:41:02.809182   71805 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:41:02.809262   71805 ssh_runner.go:195] Run: crio --version
	I0920 22:41:02.841829   71805 ssh_runner.go:195] Run: crio --version
	I0920 22:41:02.877431   71805 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:41:02.878784   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetIP
	I0920 22:41:02.881940   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:02.882379   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:ed:7a", ip: ""} in network mk-newest-cni-402676: {Iface:virbr3 ExpiryTime:2024-09-20 23:40:51 +0000 UTC Type:0 Mac:52:54:00:45:ed:7a Iaid: IPaddr:192.168.61.18 Prefix:24 Hostname:newest-cni-402676 Clientid:01:52:54:00:45:ed:7a}
	I0920 22:41:02.882408   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined IP address 192.168.61.18 and MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:41:02.882636   71805 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:41:02.887927   71805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
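
The grep/echo/cp pipeline above makes the host.minikube.internal entry in /etc/hosts idempotent: any stale line is dropped before the fresh mapping is appended. A Go sketch of that pattern (IP and hostname from the log; the helper name is an illustrative assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any stale line for the given hostname and appends
// a fresh "ip<TAB>hostname" mapping, matching the pipeline in the log above.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
		fmt.Println("update failed (needs root):", err)
	}
}
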
	I0920 22:41:02.905500   71805 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0920 22:41:01.371993   72245 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0920 22:41:01.372187   72245 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:41:01.372240   72245 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:41:01.389033   72245 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38111
	I0920 22:41:01.389464   72245 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:41:01.390024   72245 main.go:141] libmachine: Using API Version  1
	I0920 22:41:01.390062   72245 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:41:01.390416   72245 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:41:01.390587   72245 main.go:141] libmachine: (auto-833441) Calling .GetMachineName
	I0920 22:41:01.390771   72245 main.go:141] libmachine: (auto-833441) Calling .DriverName
	I0920 22:41:01.390937   72245 start.go:159] libmachine.API.Create for "auto-833441" (driver="kvm2")
	I0920 22:41:01.390988   72245 client.go:168] LocalClient.Create starting
	I0920 22:41:01.391029   72245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 22:41:01.391069   72245 main.go:141] libmachine: Decoding PEM data...
	I0920 22:41:01.391099   72245 main.go:141] libmachine: Parsing certificate...
	I0920 22:41:01.391167   72245 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 22:41:01.391191   72245 main.go:141] libmachine: Decoding PEM data...
	I0920 22:41:01.391211   72245 main.go:141] libmachine: Parsing certificate...
	I0920 22:41:01.391240   72245 main.go:141] libmachine: Running pre-create checks...
	I0920 22:41:01.391261   72245 main.go:141] libmachine: (auto-833441) Calling .PreCreateCheck
	I0920 22:41:01.391606   72245 main.go:141] libmachine: (auto-833441) Calling .GetConfigRaw
	I0920 22:41:01.392059   72245 main.go:141] libmachine: Creating machine...
	I0920 22:41:01.392075   72245 main.go:141] libmachine: (auto-833441) Calling .Create
	I0920 22:41:01.392237   72245 main.go:141] libmachine: (auto-833441) Creating KVM machine...
	I0920 22:41:01.393447   72245 main.go:141] libmachine: (auto-833441) DBG | found existing default KVM network
	I0920 22:41:01.394828   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.394661   72268 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:2c:33} reservation:<nil>}
	I0920 22:41:01.395718   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.395636   72268 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:9c:de} reservation:<nil>}
	I0920 22:41:01.396970   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.396878   72268 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:d1:b1:c8} reservation:<nil>}
	I0920 22:41:01.398101   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.398008   72268 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000383110}
	I0920 22:41:01.398129   72245 main.go:141] libmachine: (auto-833441) DBG | created network xml: 
	I0920 22:41:01.398146   72245 main.go:141] libmachine: (auto-833441) DBG | <network>
	I0920 22:41:01.398157   72245 main.go:141] libmachine: (auto-833441) DBG |   <name>mk-auto-833441</name>
	I0920 22:41:01.398168   72245 main.go:141] libmachine: (auto-833441) DBG |   <dns enable='no'/>
	I0920 22:41:01.398177   72245 main.go:141] libmachine: (auto-833441) DBG |   
	I0920 22:41:01.398190   72245 main.go:141] libmachine: (auto-833441) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0920 22:41:01.398201   72245 main.go:141] libmachine: (auto-833441) DBG |     <dhcp>
	I0920 22:41:01.398212   72245 main.go:141] libmachine: (auto-833441) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0920 22:41:01.398227   72245 main.go:141] libmachine: (auto-833441) DBG |     </dhcp>
	I0920 22:41:01.398238   72245 main.go:141] libmachine: (auto-833441) DBG |   </ip>
	I0920 22:41:01.398244   72245 main.go:141] libmachine: (auto-833441) DBG |   
	I0920 22:41:01.398253   72245 main.go:141] libmachine: (auto-833441) DBG | </network>
	I0920 22:41:01.398262   72245 main.go:141] libmachine: (auto-833441) DBG | 
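
The network.go lines above skip 192.168.39.0/24, 192.168.50.0/24 and 192.168.61.0/24 because virbr1-virbr3 already occupy them, then settle on 192.168.72.0/24 and emit the libvirt network XML. A Go sketch of picking the first candidate /24 not covered by a local interface (the candidate list is taken from the log; the helper is only a loose illustration of the real subnet scan):

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate /24 that no local interface
// address falls into, loosely mirroring the subnet scan in the log above.
func freePrivateSubnet(candidates []string) (string, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return "", err
	}
	for _, c := range candidates {
		_, cidr, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && cidr.Contains(ipnet.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	subnet, err := freePrivateSubnet([]string{
		"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24",
	})
	fmt.Println(subnet, err)
}
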
	I0920 22:41:01.404308   72245 main.go:141] libmachine: (auto-833441) DBG | trying to create private KVM network mk-auto-833441 192.168.72.0/24...
	I0920 22:41:01.483178   72245 main.go:141] libmachine: (auto-833441) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441 ...
	I0920 22:41:01.483222   72245 main.go:141] libmachine: (auto-833441) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 22:41:01.483249   72245 main.go:141] libmachine: (auto-833441) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 22:41:01.483267   72245 main.go:141] libmachine: (auto-833441) DBG | private KVM network mk-auto-833441 192.168.72.0/24 created
	I0920 22:41:01.483282   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.483052   72268 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:41:01.730602   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.730492   72268 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441/id_rsa...
	I0920 22:41:01.828744   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.828600   72268 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441/auto-833441.rawdisk...
	I0920 22:41:01.828780   72245 main.go:141] libmachine: (auto-833441) DBG | Writing magic tar header
	I0920 22:41:01.828796   72245 main.go:141] libmachine: (auto-833441) DBG | Writing SSH key tar header
	I0920 22:41:01.828808   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:01.828755   72268 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441 ...
	I0920 22:41:01.828864   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441
	I0920 22:41:01.828891   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441 (perms=drwx------)
	I0920 22:41:01.828912   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 22:41:01.829000   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 22:41:01.829020   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:41:01.829034   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 22:41:01.829066   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 22:41:01.829087   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home/jenkins
	I0920 22:41:01.829096   72245 main.go:141] libmachine: (auto-833441) DBG | Checking permissions on dir: /home
	I0920 22:41:01.829107   72245 main.go:141] libmachine: (auto-833441) DBG | Skipping /home - not owner
	I0920 22:41:01.829120   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 22:41:01.829133   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 22:41:01.829140   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 22:41:01.829152   72245 main.go:141] libmachine: (auto-833441) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 22:41:01.829158   72245 main.go:141] libmachine: (auto-833441) Creating domain...
	I0920 22:41:01.830346   72245 main.go:141] libmachine: (auto-833441) define libvirt domain using xml: 
	I0920 22:41:01.830363   72245 main.go:141] libmachine: (auto-833441) <domain type='kvm'>
	I0920 22:41:01.830374   72245 main.go:141] libmachine: (auto-833441)   <name>auto-833441</name>
	I0920 22:41:01.830382   72245 main.go:141] libmachine: (auto-833441)   <memory unit='MiB'>3072</memory>
	I0920 22:41:01.830391   72245 main.go:141] libmachine: (auto-833441)   <vcpu>2</vcpu>
	I0920 22:41:01.830398   72245 main.go:141] libmachine: (auto-833441)   <features>
	I0920 22:41:01.830407   72245 main.go:141] libmachine: (auto-833441)     <acpi/>
	I0920 22:41:01.830417   72245 main.go:141] libmachine: (auto-833441)     <apic/>
	I0920 22:41:01.830436   72245 main.go:141] libmachine: (auto-833441)     <pae/>
	I0920 22:41:01.830448   72245 main.go:141] libmachine: (auto-833441)     
	I0920 22:41:01.830453   72245 main.go:141] libmachine: (auto-833441)   </features>
	I0920 22:41:01.830465   72245 main.go:141] libmachine: (auto-833441)   <cpu mode='host-passthrough'>
	I0920 22:41:01.830474   72245 main.go:141] libmachine: (auto-833441)   
	I0920 22:41:01.830477   72245 main.go:141] libmachine: (auto-833441)   </cpu>
	I0920 22:41:01.830483   72245 main.go:141] libmachine: (auto-833441)   <os>
	I0920 22:41:01.830489   72245 main.go:141] libmachine: (auto-833441)     <type>hvm</type>
	I0920 22:41:01.830497   72245 main.go:141] libmachine: (auto-833441)     <boot dev='cdrom'/>
	I0920 22:41:01.830507   72245 main.go:141] libmachine: (auto-833441)     <boot dev='hd'/>
	I0920 22:41:01.830516   72245 main.go:141] libmachine: (auto-833441)     <bootmenu enable='no'/>
	I0920 22:41:01.830524   72245 main.go:141] libmachine: (auto-833441)   </os>
	I0920 22:41:01.830530   72245 main.go:141] libmachine: (auto-833441)   <devices>
	I0920 22:41:01.830537   72245 main.go:141] libmachine: (auto-833441)     <disk type='file' device='cdrom'>
	I0920 22:41:01.830548   72245 main.go:141] libmachine: (auto-833441)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441/boot2docker.iso'/>
	I0920 22:41:01.830555   72245 main.go:141] libmachine: (auto-833441)       <target dev='hdc' bus='scsi'/>
	I0920 22:41:01.830560   72245 main.go:141] libmachine: (auto-833441)       <readonly/>
	I0920 22:41:01.830564   72245 main.go:141] libmachine: (auto-833441)     </disk>
	I0920 22:41:01.830571   72245 main.go:141] libmachine: (auto-833441)     <disk type='file' device='disk'>
	I0920 22:41:01.830583   72245 main.go:141] libmachine: (auto-833441)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 22:41:01.830601   72245 main.go:141] libmachine: (auto-833441)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/auto-833441/auto-833441.rawdisk'/>
	I0920 22:41:01.830615   72245 main.go:141] libmachine: (auto-833441)       <target dev='hda' bus='virtio'/>
	I0920 22:41:01.830626   72245 main.go:141] libmachine: (auto-833441)     </disk>
	I0920 22:41:01.830632   72245 main.go:141] libmachine: (auto-833441)     <interface type='network'>
	I0920 22:41:01.830640   72245 main.go:141] libmachine: (auto-833441)       <source network='mk-auto-833441'/>
	I0920 22:41:01.830645   72245 main.go:141] libmachine: (auto-833441)       <model type='virtio'/>
	I0920 22:41:01.830649   72245 main.go:141] libmachine: (auto-833441)     </interface>
	I0920 22:41:01.830656   72245 main.go:141] libmachine: (auto-833441)     <interface type='network'>
	I0920 22:41:01.830664   72245 main.go:141] libmachine: (auto-833441)       <source network='default'/>
	I0920 22:41:01.830674   72245 main.go:141] libmachine: (auto-833441)       <model type='virtio'/>
	I0920 22:41:01.830707   72245 main.go:141] libmachine: (auto-833441)     </interface>
	I0920 22:41:01.830730   72245 main.go:141] libmachine: (auto-833441)     <serial type='pty'>
	I0920 22:41:01.830744   72245 main.go:141] libmachine: (auto-833441)       <target port='0'/>
	I0920 22:41:01.830754   72245 main.go:141] libmachine: (auto-833441)     </serial>
	I0920 22:41:01.830765   72245 main.go:141] libmachine: (auto-833441)     <console type='pty'>
	I0920 22:41:01.830776   72245 main.go:141] libmachine: (auto-833441)       <target type='serial' port='0'/>
	I0920 22:41:01.830785   72245 main.go:141] libmachine: (auto-833441)     </console>
	I0920 22:41:01.830795   72245 main.go:141] libmachine: (auto-833441)     <rng model='virtio'>
	I0920 22:41:01.830822   72245 main.go:141] libmachine: (auto-833441)       <backend model='random'>/dev/random</backend>
	I0920 22:41:01.830844   72245 main.go:141] libmachine: (auto-833441)     </rng>
	I0920 22:41:01.830870   72245 main.go:141] libmachine: (auto-833441)     
	I0920 22:41:01.830879   72245 main.go:141] libmachine: (auto-833441)     
	I0920 22:41:01.830891   72245 main.go:141] libmachine: (auto-833441)   </devices>
	I0920 22:41:01.830902   72245 main.go:141] libmachine: (auto-833441) </domain>
	I0920 22:41:01.830917   72245 main.go:141] libmachine: (auto-833441) 
	I0920 22:41:01.835705   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:38:2b:dd in network default
	I0920 22:41:01.836483   72245 main.go:141] libmachine: (auto-833441) Ensuring networks are active...
	I0920 22:41:01.836522   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:01.837260   72245 main.go:141] libmachine: (auto-833441) Ensuring network default is active
	I0920 22:41:01.837575   72245 main.go:141] libmachine: (auto-833441) Ensuring network mk-auto-833441 is active
	I0920 22:41:01.838149   72245 main.go:141] libmachine: (auto-833441) Getting domain xml...
	I0920 22:41:01.838977   72245 main.go:141] libmachine: (auto-833441) Creating domain...
	I0920 22:41:03.177201   72245 main.go:141] libmachine: (auto-833441) Waiting to get IP...
	I0920 22:41:03.178249   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:03.178657   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:03.178685   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:03.178637   72268 retry.go:31] will retry after 188.512201ms: waiting for machine to come up
	I0920 22:41:03.369086   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:03.369862   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:03.369901   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:03.369785   72268 retry.go:31] will retry after 321.070806ms: waiting for machine to come up
	I0920 22:41:03.692221   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:03.692802   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:03.692831   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:03.692779   72268 retry.go:31] will retry after 388.403058ms: waiting for machine to come up
	I0920 22:41:04.083444   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:04.084255   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:04.084282   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:04.084174   72268 retry.go:31] will retry after 534.214562ms: waiting for machine to come up
	I0920 22:41:04.619752   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:04.620330   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:04.620358   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:04.620294   72268 retry.go:31] will retry after 657.387752ms: waiting for machine to come up
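
The retry.go lines above poll for the new VM's DHCP lease with a delay that grows (and is jittered) between attempts. A generic Go sketch of that retry-with-growing-delay loop (the poll function, growth factor and delays are illustrative; the real retry.go behavior is only inferred from the log):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("machine has no IP yet")

// retryUntil keeps calling poll with a growing, jittered delay, similar in
// spirit to the "will retry after ..." lines in the log above.
func retryUntil(poll func() error, attempts int, base time.Duration) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := poll(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
	return fmt.Errorf("gave up after %d attempts", attempts)
}

func main() {
	tries := 0
	err := retryUntil(func() error {
		tries++
		if tries < 4 {
			return errNoIP // pretend the DHCP lease is not there yet
		}
		return nil
	}, 10, 200*time.Millisecond)
	fmt.Println("result:", err, "tries:", tries)
}
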
	I0920 22:41:02.906820   71805 kubeadm.go:883] updating cluster {Name:newest-cni-402676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-402676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:41:02.906932   71805 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:41:02.907039   71805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:41:02.943753   71805 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:41:02.943819   71805 ssh_runner.go:195] Run: which lz4
	I0920 22:41:02.948347   71805 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:41:02.952497   71805 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:41:02.952528   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:41:04.480223   71805 crio.go:462] duration metric: took 1.531927392s to copy over tarball
	I0920 22:41:04.480308   71805 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:41:05.279337   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:05.279778   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:05.279815   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:05.279726   72268 retry.go:31] will retry after 831.327625ms: waiting for machine to come up
	I0920 22:41:06.113173   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:06.113693   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:06.113722   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:06.113654   72268 retry.go:31] will retry after 1.100527256s: waiting for machine to come up
	I0920 22:41:07.215359   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:07.215735   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:07.215768   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:07.215727   72268 retry.go:31] will retry after 1.099240136s: waiting for machine to come up
	I0920 22:41:08.316053   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:08.316631   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:08.316657   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:08.316592   72268 retry.go:31] will retry after 1.471062778s: waiting for machine to come up
	I0920 22:41:09.789278   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:09.789764   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:09.789806   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:09.789718   72268 retry.go:31] will retry after 2.110353131s: waiting for machine to come up
	I0920 22:41:06.601737   71805 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.121406948s)
	I0920 22:41:06.601772   71805 crio.go:469] duration metric: took 2.121506284s to extract the tarball
	I0920 22:41:06.601782   71805 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:41:06.643858   71805 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:41:06.700086   71805 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:41:06.700110   71805 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:41:06.700118   71805 kubeadm.go:934] updating node { 192.168.61.18 8443 v1.31.1 crio true true} ...
	I0920 22:41:06.700241   71805 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-402676 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-402676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:41:06.700343   71805 ssh_runner.go:195] Run: crio config
	I0920 22:41:06.751508   71805 cni.go:84] Creating CNI manager for ""
	I0920 22:41:06.751535   71805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:41:06.751546   71805 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0920 22:41:06.751573   71805 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.18 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-402676 NodeName:newest-cni-402676 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:41:06.751749   71805 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-402676"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:41:06.751820   71805 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:41:06.763712   71805 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:41:06.763788   71805 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:41:06.774663   71805 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I0920 22:41:06.793645   71805 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:41:06.812569   71805 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2282 bytes)
	I0920 22:41:06.831058   71805 ssh_runner.go:195] Run: grep 192.168.61.18	control-plane.minikube.internal$ /etc/hosts
	I0920 22:41:06.835803   71805 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:41:06.850265   71805 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:41:06.998896   71805 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:41:07.021735   71805 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676 for IP: 192.168.61.18
	I0920 22:41:07.021760   71805 certs.go:194] generating shared ca certs ...
	I0920 22:41:07.021780   71805 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.021953   71805 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:41:07.022022   71805 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:41:07.022037   71805 certs.go:256] generating profile certs ...
	I0920 22:41:07.022108   71805 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.key
	I0920 22:41:07.022127   71805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.crt with IP's: []
	I0920 22:41:07.213392   71805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.crt ...
	I0920 22:41:07.213421   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.crt: {Name:mk816180a68dd6c38791f60d743270c7688aa52f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.213613   71805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.key ...
	I0920 22:41:07.213627   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/client.key: {Name:mk091aa138ee2b999274f79a9cb2481c49158c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.213744   71805 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key.d4ef285b
	I0920 22:41:07.213765   71805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt.d4ef285b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.18]
	I0920 22:41:07.362728   71805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt.d4ef285b ...
	I0920 22:41:07.362762   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt.d4ef285b: {Name:mkfecaf816675c2b54dbae7bf92e30b61f6e6006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.362990   71805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key.d4ef285b ...
	I0920 22:41:07.363017   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key.d4ef285b: {Name:mkbcfc2a8e45a3d5eba386145e0e81da297afe7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.363119   71805 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt.d4ef285b -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt
	I0920 22:41:07.363191   71805 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key.d4ef285b -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key
	I0920 22:41:07.363243   71805 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.key
	I0920 22:41:07.363261   71805 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.crt with IP's: []
	I0920 22:41:07.679727   71805 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.crt ...
	I0920 22:41:07.679770   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.crt: {Name:mk0e4dfc06732beb88d2e814453adb310d2791f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:41:07.679934   71805 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.key ...
	I0920 22:41:07.679948   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.key: {Name:mk45aa9bc75382b56edcb752628ec8cf03b7c25e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
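
The certs.go/crypto.go lines above issue the profile's client, apiserver and proxy-client certificates, with the apiserver cert carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.61.18. A minimal Go sketch of issuing a CA-signed server certificate with IP SANs using crypto/x509 (key sizes, validity period and the skipped error handling are simplifying assumptions; this is not minikube's actual cert code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA key/cert, standing in for minikubeCA in the log above.
	// Errors are ignored here only to keep the sketch short.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP SANs seen in the apiserver cert generation above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.18"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
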
	I0920 22:41:07.680117   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:41:07.680153   71805 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:41:07.680164   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:41:07.680188   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:41:07.680211   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:41:07.680232   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:41:07.680268   71805 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:41:07.680773   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:41:07.713578   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:41:07.753880   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:41:07.782633   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:41:07.812767   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:41:07.871935   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:41:07.902621   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:41:07.936500   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 22:41:07.964501   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:41:07.992357   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:41:08.019806   71805 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:41:08.049844   71805 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:41:08.069071   71805 ssh_runner.go:195] Run: openssl version
	I0920 22:41:08.075199   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:41:08.087301   71805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:41:08.092078   71805 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:41:08.092144   71805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:41:08.098569   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:41:08.110128   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:41:08.121729   71805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:41:08.126593   71805 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:41:08.126652   71805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:41:08.133080   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:41:08.145148   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:41:08.156478   71805 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:41:08.160995   71805 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:41:08.161072   71805 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:41:08.167595   71805 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
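
The openssl/ln steps above install the CA bundle the classic OpenSSL way: compute the subject hash of each .pem under /usr/share/ca-certificates and symlink /etc/ssl/certs/<hash>.0 back to it. A Go sketch of that hash-and-symlink step via os/exec (the helper name is an illustrative assumption; the openssl invocation matches the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks /etc/ssl/certs/<hash>.0 to the given certificate,
// using "openssl x509 -hash -noout" exactly as in the log above.
func linkBySubjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(link, err)
}
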
	I0920 22:41:08.178926   71805 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:41:08.183113   71805 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 22:41:08.183176   71805 kubeadm.go:392] StartCluster: {Name:newest-cni-402676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-402676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.18 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:41:08.183256   71805 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:41:08.183312   71805 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:41:08.223545   71805 cri.go:89] found id: ""
	I0920 22:41:08.223619   71805 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:41:08.234214   71805 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:41:08.245212   71805 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:41:08.256203   71805 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:41:08.256221   71805 kubeadm.go:157] found existing configuration files:
	
	I0920 22:41:08.256264   71805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:41:08.265737   71805 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:41:08.265794   71805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:41:08.275847   71805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:41:08.285640   71805 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:41:08.285702   71805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:41:08.298713   71805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:41:08.310274   71805 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:41:08.310341   71805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:41:08.321158   71805 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:41:08.330545   71805 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:41:08.330611   71805 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:41:08.340738   71805 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:41:08.467242   71805 kubeadm.go:310] W0920 22:41:08.420771     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:41:08.467901   71805 kubeadm.go:310] W0920 22:41:08.421751     828 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:41:08.597797   71805 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:41:11.902141   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:11.902671   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:11.902696   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:11.902623   72268 retry.go:31] will retry after 2.197017075s: waiting for machine to come up
	I0920 22:41:14.101189   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:14.101778   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:14.101804   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:14.101753   72268 retry.go:31] will retry after 2.924392051s: waiting for machine to come up
	I0920 22:41:19.642435   71805 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:41:19.642554   71805 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:41:19.642656   71805 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:41:19.642771   71805 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:41:19.642902   71805 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:41:19.643029   71805 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:41:19.644698   71805 out.go:235]   - Generating certificates and keys ...
	I0920 22:41:19.644806   71805 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:41:19.644885   71805 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:41:19.644977   71805 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 22:41:19.645051   71805 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 22:41:19.645139   71805 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 22:41:19.645257   71805 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 22:41:19.645351   71805 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 22:41:19.645503   71805 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-402676] and IPs [192.168.61.18 127.0.0.1 ::1]
	I0920 22:41:19.645578   71805 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 22:41:19.645720   71805 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-402676] and IPs [192.168.61.18 127.0.0.1 ::1]
	I0920 22:41:19.645806   71805 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 22:41:19.645903   71805 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 22:41:19.645994   71805 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 22:41:19.646085   71805 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:41:19.646166   71805 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:41:19.646251   71805 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:41:19.646338   71805 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:41:19.646416   71805 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:41:19.646511   71805 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:41:19.646632   71805 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:41:19.646726   71805 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:41:19.648357   71805 out.go:235]   - Booting up control plane ...
	I0920 22:41:19.648461   71805 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:41:19.648561   71805 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:41:19.648625   71805 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:41:19.648720   71805 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:41:19.648795   71805 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:41:19.648830   71805 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:41:19.648943   71805 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:41:19.649042   71805 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:41:19.649101   71805 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.248813ms
	I0920 22:41:19.649184   71805 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:41:19.649275   71805 kubeadm.go:310] [api-check] The API server is healthy after 5.503043252s
	I0920 22:41:19.649406   71805 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:41:19.649537   71805 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:41:19.649614   71805 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:41:19.649850   71805 kubeadm.go:310] [mark-control-plane] Marking the node newest-cni-402676 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:41:19.649922   71805 kubeadm.go:310] [bootstrap-token] Using token: 7s0o68.svplxka36557pwqd
	I0920 22:41:19.651260   71805 out.go:235]   - Configuring RBAC rules ...
	I0920 22:41:19.651374   71805 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:41:19.651493   71805 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:41:19.651665   71805 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:41:19.651788   71805 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:41:19.651919   71805 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:41:19.651997   71805 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:41:19.652098   71805 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:41:19.652136   71805 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:41:19.652178   71805 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:41:19.652184   71805 kubeadm.go:310] 
	I0920 22:41:19.652266   71805 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:41:19.652283   71805 kubeadm.go:310] 
	I0920 22:41:19.652393   71805 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:41:19.652404   71805 kubeadm.go:310] 
	I0920 22:41:19.652437   71805 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:41:19.652510   71805 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:41:19.652596   71805 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:41:19.652614   71805 kubeadm.go:310] 
	I0920 22:41:19.652698   71805 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:41:19.652711   71805 kubeadm.go:310] 
	I0920 22:41:19.652765   71805 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:41:19.652774   71805 kubeadm.go:310] 
	I0920 22:41:19.652842   71805 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:41:19.652911   71805 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:41:19.652982   71805 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:41:19.652988   71805 kubeadm.go:310] 
	I0920 22:41:19.653109   71805 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:41:19.653224   71805 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:41:19.653241   71805 kubeadm.go:310] 
	I0920 22:41:19.653374   71805 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7s0o68.svplxka36557pwqd \
	I0920 22:41:19.653525   71805 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:41:19.653555   71805 kubeadm.go:310] 	--control-plane 
	I0920 22:41:19.653563   71805 kubeadm.go:310] 
	I0920 22:41:19.653681   71805 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:41:19.653694   71805 kubeadm.go:310] 
	I0920 22:41:19.653792   71805 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7s0o68.svplxka36557pwqd \
	I0920 22:41:19.653890   71805 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:41:19.653903   71805 cni.go:84] Creating CNI manager for ""
	I0920 22:41:19.653911   71805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:41:19.655501   71805 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:41:17.027501   72245 main.go:141] libmachine: (auto-833441) DBG | domain auto-833441 has defined MAC address 52:54:00:12:28:ae in network mk-auto-833441
	I0920 22:41:17.028090   72245 main.go:141] libmachine: (auto-833441) DBG | unable to find current IP address of domain auto-833441 in network mk-auto-833441
	I0920 22:41:17.028138   72245 main.go:141] libmachine: (auto-833441) DBG | I0920 22:41:17.028058   72268 retry.go:31] will retry after 4.282891394s: waiting for machine to come up
	I0920 22:41:19.656863   71805 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:41:19.668605   71805 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:41:19.688002   71805 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:41:19.688136   71805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:41:19.688144   71805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-402676 minikube.k8s.io/updated_at=2024_09_20T22_41_19_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=newest-cni-402676 minikube.k8s.io/primary=true
	I0920 22:41:19.721425   71805 ops.go:34] apiserver oom_adj: -16
	I0920 22:41:19.944133   71805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:41:20.444448   71805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:41:20.944869   71805 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
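	
	The run above copies a 496-byte bridge conflist to /etc/cni/net.d/1-k8s.conflist and creates the minikube-rbac clusterrolebinding for the kube-system:default service account. A quick way to check both on the node is sketched below; these commands are illustrative only (they assume SSH access to the VM, e.g. via `minikube ssh`) and are not part of this test run's output:
	
	    # show the bridge CNI config that minikube wrote to the node
	    sudo cat /etc/cni/net.d/1-k8s.conflist
	    # confirm the RBAC binding created during bootstrap
	    kubectl get clusterrolebinding minikube-rbac -o yaml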
	
	
	==> CRI-O <==
	Sep 20 22:41:21 embed-certs-562750 crio[722]: time="2024-09-20 22:41:21.973117588Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872081973095292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17dcb07e-0740-4464-abe3-106431ee9f88 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:21 embed-certs-562750 crio[722]: time="2024-09-20 22:41:21.973656040Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=319c1775-21f9-498b-a806-6824003406ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:21 embed-certs-562750 crio[722]: time="2024-09-20 22:41:21.973723268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=319c1775-21f9-498b-a806-6824003406ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:21 embed-certs-562750 crio[722]: time="2024-09-20 22:41:21.973976787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=319c1775-21f9-498b-a806-6824003406ee name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.030635628Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15f5a4a2-118c-415b-9552-9faeb616ed61 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.030752912Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f5a4a2-118c-415b-9552-9faeb616ed61 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.032196911Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73fb0ba9-5929-479f-96f6-2866c52b4ac4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.032808893Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872082032774221,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73fb0ba9-5929-479f-96f6-2866c52b4ac4 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.033431621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c149d4b4-a394-4471-9213-74a949a293ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.033681179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c149d4b4-a394-4471-9213-74a949a293ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.034977111Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c149d4b4-a394-4471-9213-74a949a293ca name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.086518887Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=764b9227-b7d3-4c33-9d09-b9580889787a name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.086634266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=764b9227-b7d3-4c33-9d09-b9580889787a name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.088456988Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=93e9fe46-ccd8-4a5b-a180-7c2cdb42b27d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.089150299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872082089075263,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=93e9fe46-ccd8-4a5b-a180-7c2cdb42b27d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.089775673Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4906bed5-5a31-4716-bff5-177b3441522e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.089866161Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4906bed5-5a31-4716-bff5-177b3441522e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.090260759Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4906bed5-5a31-4716-bff5-177b3441522e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.138267021Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f25e4e32-fa30-4fab-9656-15f2f59fa2d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.138364263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f25e4e32-fa30-4fab-9656-15f2f59fa2d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.139449507Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=084e861a-0c21-4aa5-9152-b177ec0dcc6a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.139840920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872082139821413,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=084e861a-0c21-4aa5-9152-b177ec0dcc6a name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.140466099Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04d112b3-fa39-45e5-97a6-b2c947f4b0ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.140543358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04d112b3-fa39-45e5-97a6-b2c947f4b0ea name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:41:22 embed-certs-562750 crio[722]: time="2024-09-20 22:41:22.140738320Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726870838738106130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ae255cbc7eac6c747ca8b618ded6209da5594c3cd32f6ac6642aa6c620fbd03,PodSandboxId:a9f41ee2c3cfd87823e58bf71fc648b77d12adf0f028d5470deaa29ca8612ecc,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1726870825817451940,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 246386c0-503a-4f46-8c53-5ce47131b001,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1,PodSandboxId:50e558f13ca129fc5f09c8f0d643272f8abceaded2ea31db7ae18c7772c7e6f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726870823697860081,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-46wxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfc28b7-4bdf-4f37-82fb-59316bec0274,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640,PodSandboxId:30749e574f00b61292a12c6baf79eefa94f06e2a36efeec757da03c5fa03f98d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1726870807932121496,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wsrnw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 173d3a74-5546-4d40-8
111-0607c3cbdbba,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba,PodSandboxId:7dc9d2321abd188ea78fed502def171f607bf906f3867b53e0c21b6d8d20067c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1726870807874180910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12d59a28-0756-4027-bd21-97fdc86a8
137,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f,PodSandboxId:aaf3dad6ece4cba511ae2bd28f04f3c3de4b305261136f1c4d046f5117f790a7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726870804251294049,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8084b96192d2fb41fda359d2bedb8dd,},Annotations:map[string]string{io.kub
ernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b,PodSandboxId:74ce2ef4cd63c58ee00af804bced9469b99e82ff15f5e9564a2ff4f370890567,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726870804247710234,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fe55b2c88ce29a7a29e5f88b95b0f49,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a,PodSandboxId:2f465f090e7f70a7c7e929c6e5e3ac3322e8608b8a5f342975d4558db4fd6ae3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726870804132686181,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74b94e24003ba343ba26d810925fbbdd,},Annotations:map[string]string{io.
kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097,PodSandboxId:9c4d734f964fed1c80a121b8e253c1be64cf33bbbc6c6189f8533c6aa096e5be,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726870804157192152,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-562750,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 26c668b42913ed1686fab257328e725b,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04d112b3-fa39-45e5-97a6-b2c947f4b0ea name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4cb8dacd4796a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Running             storage-provisioner       2                   7dc9d2321abd1       storage-provisioner
	8ae255cbc7eac       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   20 minutes ago      Running             busybox                   1                   a9f41ee2c3cfd       busybox
	c6e4193ea7db3       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      20 minutes ago      Running             coredns                   1                   50e558f13ca12       coredns-7c65d6cfc9-46wxm
	e7fe3b2aa3fbf       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   30749e574f00b       kube-proxy-wsrnw
	a70c22cc89dec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   7dc9d2321abd1       storage-provisioner
	de480c158751f       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   aaf3dad6ece4c       etcd-embed-certs-562750
	80549c2470100       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   74ce2ef4cd63c       kube-scheduler-embed-certs-562750
	15ee26017cb99       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   9c4d734f964fe       kube-apiserver-embed-certs-562750
	065928d7289f2       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   2f465f090e7f7       kube-controller-manager-embed-certs-562750
	
	
	==> coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43434 - 44150 "HINFO IN 3449325025002424531.9216361121714409084. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009921692s
	
	
	==> describe nodes <==
	Name:               embed-certs-562750
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-562750
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=embed-certs-562750
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_12_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:12:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-562750
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:41:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:41:02 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:41:02 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:41:02 +0000   Fri, 20 Sep 2024 22:12:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:41:02 +0000   Fri, 20 Sep 2024 22:20:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.193
	  Hostname:    embed-certs-562750
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 92775a493d91427b88236f272b18a57d
	  System UUID:                92775a49-3d91-427b-8823-6f272b18a57d
	  Boot ID:                    81af2517-1757-4a57-870d-ac70eddc1f48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-46wxm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-embed-certs-562750                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-embed-certs-562750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-embed-certs-562750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-wsrnw                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-embed-certs-562750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-4j9zb               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m                kubelet          Node embed-certs-562750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node embed-certs-562750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node embed-certs-562750 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node embed-certs-562750 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node embed-certs-562750 event: Registered Node embed-certs-562750 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-562750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-562750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-562750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-562750 event: Registered Node embed-certs-562750 in Controller
	
	
	==> dmesg <==
	[Sep20 22:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051007] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040318] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.825295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.556072] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.664362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.829686] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.060505] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056819] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.182364] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.146152] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +0.329664] systemd-fstab-generator[711]: Ignoring "noauto" option for root device
	[Sep20 22:20] systemd-fstab-generator[804]: Ignoring "noauto" option for root device
	[  +0.066860] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.126504] systemd-fstab-generator[925]: Ignoring "noauto" option for root device
	[  +4.591425] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.975068] systemd-fstab-generator[1556]: Ignoring "noauto" option for root device
	[  +3.760248] kauditd_printk_skb: 64 callbacks suppressed
	[  +8.824850] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.427443] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] <==
	{"level":"info","ts":"2024-09-20T22:20:06.101735Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:20:06.101779Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:20:06.102521Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:20:06.102521Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:20:06.103384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.193:2379"}
	{"level":"info","ts":"2024-09-20T22:20:06.103677Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-20T22:20:22.468240Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"208.679214ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6703768649290275979 > lease_revoke:<id:5d0892117e1f446b>","response":"size:28"}
	{"level":"warn","ts":"2024-09-20T22:20:22.720807Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.134463ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6703768649290275980 > lease_revoke:<id:5d0892117e1f4414>","response":"size:28"}
	{"level":"info","ts":"2024-09-20T22:20:22.721035Z","caller":"traceutil/trace.go:171","msg":"trace[38428455] linearizableReadLoop","detail":"{readStateIndex:650; appliedIndex:649; }","duration":"236.850495ms","start":"2024-09-20T22:20:22.484160Z","end":"2024-09-20T22:20:22.721010Z","steps":["trace[38428455] 'read index received'  (duration: 40.114µs)","trace[38428455] 'applied index is now lower than readState.Index'  (duration: 236.808259ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T22:20:22.721089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.77103ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T22:20:22.721150Z","caller":"traceutil/trace.go:171","msg":"trace[2032221090] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"210.84654ms","start":"2024-09-20T22:20:22.510294Z","end":"2024-09-20T22:20:22.721140Z","steps":["trace[2032221090] 'range keys from in-memory index tree'  (duration: 210.762891ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:20:22.721291Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"237.122429ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7c65d6cfc9-46wxm\" ","response":"range_response_count:1 size:4949"}
	{"level":"info","ts":"2024-09-20T22:20:22.721358Z","caller":"traceutil/trace.go:171","msg":"trace[1691729838] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7c65d6cfc9-46wxm; range_end:; response_count:1; response_revision:607; }","duration":"237.211531ms","start":"2024-09-20T22:20:22.484137Z","end":"2024-09-20T22:20:22.721349Z","steps":["trace[1691729838] 'agreement among raft nodes before linearized reading'  (duration: 236.966969ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:23.479208Z","caller":"traceutil/trace.go:171","msg":"trace[791540973] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"158.739675ms","start":"2024-09-20T22:20:23.320448Z","end":"2024-09-20T22:20:23.479187Z","steps":["trace[791540973] 'process raft request'  (duration: 158.584995ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:42.662373Z","caller":"traceutil/trace.go:171","msg":"trace[1807087431] transaction","detail":"{read_only:false; response_revision:644; number_of_response:1; }","duration":"124.678178ms","start":"2024-09-20T22:20:42.537669Z","end":"2024-09-20T22:20:42.662348Z","steps":["trace[1807087431] 'process raft request'  (duration: 124.464415ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:20:42.688737Z","caller":"traceutil/trace.go:171","msg":"trace[368319078] transaction","detail":"{read_only:false; response_revision:645; number_of_response:1; }","duration":"148.197568ms","start":"2024-09-20T22:20:42.540522Z","end":"2024-09-20T22:20:42.688719Z","steps":["trace[368319078] 'process raft request'  (duration: 147.670156ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:30:06.132668Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":870}
	{"level":"info","ts":"2024-09-20T22:30:06.142408Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":870,"took":"9.415723ms","hash":3562258311,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2826240,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-09-20T22:30:06.142466Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3562258311,"revision":870,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T22:35:06.141721Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1112}
	{"level":"info","ts":"2024-09-20T22:35:06.146030Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1112,"took":"3.87322ms","hash":3749723303,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-20T22:35:06.146095Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3749723303,"revision":1112,"compact-revision":870}
	{"level":"info","ts":"2024-09-20T22:40:06.148841Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1356}
	{"level":"info","ts":"2024-09-20T22:40:06.152826Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1356,"took":"3.38677ms","hash":1457335739,"current-db-size-bytes":2826240,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1626112,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-09-20T22:40:06.153009Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1457335739,"revision":1356,"compact-revision":1112}
	
	
	==> kernel <==
	 22:41:22 up 21 min,  0 users,  load average: 0.12, 0.13, 0.09
	Linux embed-certs-562750 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] <==
	I0920 22:38:08.437585       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:38:08.437590       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:40:07.433167       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:07.433553       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 22:40:08.435643       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:08.435863       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:40:08.435742       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:08.436051       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:40:08.437220       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:40:08.437262       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:41:08.438503       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:41:08.438605       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:41:08.438668       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:41:08.438720       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:41:08.439979       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:41:08.440127       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] <==
	E0920 22:36:11.171501       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:36:11.674283       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:36:29.556265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="200.907µs"
	E0920 22:36:41.178070       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:36:41.681814       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:36:43.549736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="175.857µs"
	E0920 22:37:11.186040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:37:11.689877       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:37:41.192724       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:37:41.698538       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:11.200381       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:11.708755       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:41.206527       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:41.715650       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:11.213580       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:11.723772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:41.220190       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:41.733126       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:40:11.226666       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:11.742875       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:40:41.235298       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:41.752649       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:41:02.985326       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-562750"
	E0920 22:41:11.244091       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:41:11.762056       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:20:08.147523       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:20:08.164533       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.193"]
	E0920 22:20:08.164952       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:20:08.206011       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:20:08.206087       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:20:08.206120       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:20:08.209209       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:20:08.209746       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:20:08.209781       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:20:08.212292       1 config.go:199] "Starting service config controller"
	I0920 22:20:08.212351       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:20:08.212396       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:20:08.212423       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:20:08.215693       1 config.go:328] "Starting node config controller"
	I0920 22:20:08.215758       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:20:08.312821       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:20:08.312719       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:20:08.316568       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] <==
	I0920 22:20:05.442204       1 serving.go:386] Generated self-signed cert in-memory
	W0920 22:20:07.346260       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 22:20:07.346352       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 22:20:07.346380       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 22:20:07.346404       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 22:20:07.406543       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0920 22:20:07.408957       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:20:07.413726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0920 22:20:07.415771       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0920 22:20:07.420782       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 22:20:07.415810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0920 22:20:07.524092       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:40:13 embed-certs-562750 kubelet[932]: E0920 22:40:13.860301     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872013859843938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:13 embed-certs-562750 kubelet[932]: E0920 22:40:13.860793     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872013859843938,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:17 embed-certs-562750 kubelet[932]: E0920 22:40:17.533208     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:40:23 embed-certs-562750 kubelet[932]: E0920 22:40:23.863160     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872023862448679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:23 embed-certs-562750 kubelet[932]: E0920 22:40:23.863570     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872023862448679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:28 embed-certs-562750 kubelet[932]: E0920 22:40:28.533333     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:40:33 embed-certs-562750 kubelet[932]: E0920 22:40:33.865791     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033865328976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:33 embed-certs-562750 kubelet[932]: E0920 22:40:33.866363     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033865328976,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:40 embed-certs-562750 kubelet[932]: E0920 22:40:40.533448     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:40:43 embed-certs-562750 kubelet[932]: E0920 22:40:43.879559     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872043869185275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:43 embed-certs-562750 kubelet[932]: E0920 22:40:43.880481     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872043869185275,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:53 embed-certs-562750 kubelet[932]: E0920 22:40:53.534646     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:40:53 embed-certs-562750 kubelet[932]: E0920 22:40:53.883762     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872053883210830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:53 embed-certs-562750 kubelet[932]: E0920 22:40:53.883805     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872053883210830,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]: E0920 22:41:03.555137     932 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]: E0920 22:41:03.888603     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872063887731021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:41:03 embed-certs-562750 kubelet[932]: E0920 22:41:03.888639     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872063887731021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:41:04 embed-certs-562750 kubelet[932]: E0920 22:41:04.533510     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	Sep 20 22:41:13 embed-certs-562750 kubelet[932]: E0920 22:41:13.891068     932 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872073890469778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:41:13 embed-certs-562750 kubelet[932]: E0920 22:41:13.891134     932 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872073890469778,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:41:15 embed-certs-562750 kubelet[932]: E0920 22:41:15.534380     932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-4j9zb" podUID="cce79400-7958-4b2f-bcb0-2e59046134cd"
	
	
	==> storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] <==
	I0920 22:20:38.870805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:20:38.887714       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:20:38.887822       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:20:56.299861       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:20:56.300108       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6!
	I0920 22:20:56.301124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2510ee1-0d87-4e10-896d-335b7aa732be", APIVersion:"v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6 became leader
	I0920 22:20:56.401157       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-562750_d63845e1-eff3-437a-ae7f-6dc114f3e5f6!
	
	
	==> storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] <==
	I0920 22:20:07.991211       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 22:20:37.994647       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-562750 -n embed-certs-562750
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-562750 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-4j9zb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb: exit status 1 (65.774421ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-4j9zb" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-562750 describe pod metrics-server-6867b74b74-4j9zb: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (461.39s)
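For reference, the post-mortem above reduces to two kubectl calls. A minimal sketch of repeating them by hand is below (assumptions: the embed-certs-562750 context still exists, and the -n kube-system flag is added here because the metrics-server pod listed in the node description lives in kube-system; the test's own describe ran without a namespace and the pod had already been replaced, hence NotFound):

  # List pods that are not in the Running phase, as helpers_test.go:261 does.
  kubectl --context embed-certs-562750 get po -A \
    --field-selector=status.phase!=Running \
    -o=jsonpath='{.items[*].metadata.name}'

  # Describe the non-running pod reported above; expect NotFound if it was replaced.
  kubectl --context embed-certs-562750 -n kube-system describe pod metrics-server-6867b74b74-4j9zb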

x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (514.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:43:07.757388371 +0000 UTC m=+6927.888523349
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.15µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-799783 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
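The image check at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4. A minimal sketch of the equivalent manual check (assuming the default-k8s-diff-port-799783 context is reachable, which it was not within the test's deadline):

  # Print the container image(s) of the dashboard-metrics-scraper deployment.
  kubectl --context default-k8s-diff-port-799783 -n kubernetes-dashboard \
    get deploy dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'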
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-799783 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-799783 logs -n 25: (1.335373005s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | systemctl status kubelet --all                       |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | systemctl cat kubelet                                |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | journalctl -xeu kubelet --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo docker                        | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo cat                           | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:42 UTC | 20 Sep 24 22:42 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC | 20 Sep 24 22:43 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo                               | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC | 20 Sep 24 22:43 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo find                          | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC | 20 Sep 24 22:43 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-833441 sudo crio                          | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC | 20 Sep 24 22:43 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-833441                                    | kindnet-833441            | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC | 20 Sep 24 22:43 UTC |
	| start   | -p enable-default-cni-833441                         | enable-default-cni-833441 | jenkins | v1.34.0 | 20 Sep 24 22:43 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
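The table above is the audit trail of minikube ssh calls used to dump the container-runtime state of the kindnet-833441 node before the profile was deleted. A minimal sketch of rerunning just the CRI-O portion of that collection, assuming the profile still exists (profile name and commands are taken from the table rows):

	minikube ssh -p kindnet-833441 -- sudo systemctl status crio --all --full --no-pager
	minikube ssh -p kindnet-833441 -- sudo systemctl cat crio --no-pager
	minikube ssh -p kindnet-833441 -- sudo crio config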
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:43:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:43:02.370950   77059 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:43:02.371222   77059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:43:02.371231   77059 out.go:358] Setting ErrFile to fd 2...
	I0920 22:43:02.371236   77059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:43:02.371412   77059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:43:02.371974   77059 out.go:352] Setting JSON to false
	I0920 22:43:02.373014   77059 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8730,"bootTime":1726863452,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:43:02.373110   77059 start.go:139] virtualization: kvm guest
	I0920 22:43:02.375474   77059 out.go:177] * [enable-default-cni-833441] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:43:02.377108   77059 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:43:02.377139   77059 notify.go:220] Checking for updates...
	I0920 22:43:02.379805   77059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:43:02.381075   77059 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:43:02.382388   77059 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:43:02.383658   77059 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:43:02.384982   77059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:43:02.387218   77059 config.go:182] Loaded profile config "calico-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:43:02.387401   77059 config.go:182] Loaded profile config "custom-flannel-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:43:02.387539   77059 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:43:02.387726   77059 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:43:02.431884   77059 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:43:02.433136   77059 start.go:297] selected driver: kvm2
	I0920 22:43:02.433169   77059 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:43:02.433187   77059 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:43:02.434252   77059 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:43:02.434375   77059 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:43:02.452507   77059 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:43:02.452577   77059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E0920 22:43:02.452969   77059 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0920 22:43:02.453016   77059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:43:02.453066   77059 cni.go:84] Creating CNI manager for "bridge"
	I0920 22:43:02.453077   77059 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:43:02.453180   77059 start.go:340] cluster config:
	{Name:enable-default-cni-833441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-833441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPat
h: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:43:02.453326   77059 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:43:02.455398   77059 out.go:177] * Starting "enable-default-cni-833441" primary control-plane node in "enable-default-cni-833441" cluster
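Note the warning at 22:43:02.452969: --enable-default-cni is deprecated, and minikube rewrites it to --cni=bridge before generating the cluster config (hence CNI:bridge and NetworkPlugin=cni above). A sketch of the equivalent start using the non-deprecated flag, with the same settings as the logged invocation:

	minikube start -p enable-default-cni-833441 --memory=3072 --alsologtostderr --wait=true \
	  --wait-timeout=15m --cni=bridge --driver=kvm2 --container-runtime=crio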
	I0920 22:43:02.078997   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:02.079480   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:02.079500   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:02.079441   76589 retry.go:31] will retry after 479.985951ms: waiting for machine to come up
	I0920 22:43:02.561331   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:02.561904   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:02.561927   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:02.561870   76589 retry.go:31] will retry after 687.996693ms: waiting for machine to come up
	I0920 22:43:03.252005   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:03.252451   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:03.252489   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:03.252413   76589 retry.go:31] will retry after 693.72128ms: waiting for machine to come up
	I0920 22:43:03.947393   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:03.947817   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:03.947850   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:03.947757   76589 retry.go:31] will retry after 929.600129ms: waiting for machine to come up
	I0920 22:43:04.878605   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:04.879192   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:04.879216   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:04.879142   76589 retry.go:31] will retry after 1.278882458s: waiting for machine to come up
	I0920 22:43:06.159127   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | domain custom-flannel-833441 has defined MAC address 52:54:00:c3:d9:c2 in network mk-custom-flannel-833441
	I0920 22:43:06.159656   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | unable to find current IP address of domain custom-flannel-833441 in network mk-custom-flannel-833441
	I0920 22:43:06.159692   75601 main.go:141] libmachine: (custom-flannel-833441) DBG | I0920 22:43:06.159600   76589 retry.go:31] will retry after 1.734618962s: waiting for machine to come up
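The retry loop above is libmachine waiting for the custom-flannel-833441 domain to obtain an IP on the mk-custom-flannel-833441 libvirt network. When debugging this kind of wait on the host, the DHCP lease table can be checked directly, assuming virsh is available and the KVM driver is in use as here:

	virsh --connect qemu:///system net-dhcp-leases mk-custom-flannel-833441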
	I0920 22:43:04.161180   74025 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.368467563s)
	I0920 22:43:04.161212   74025 crio.go:469] duration metric: took 2.368589155s to extract the tarball
	I0920 22:43:04.161220   74025 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:43:04.204633   74025 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:43:04.248557   74025 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:43:04.248586   74025 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:43:04.248594   74025 kubeadm.go:934] updating node { 192.168.61.75 8443 v1.31.1 crio true true} ...
	I0920 22:43:04.248702   74025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-833441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.75
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-833441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
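The [Unit]/[Service] fragment above is the kubelet systemd override minikube generates for this node; it presumably corresponds to the 312-byte file scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (that path and size are from this log; the correspondence is an assumption). To confirm the drop-in is in effect on the node, something like:

	sudo systemctl cat kubelet --no-pager   # shows kubelet.service plus any 10-kubeadm.conf drop-in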
	I0920 22:43:04.248801   74025 ssh_runner.go:195] Run: crio config
	I0920 22:43:04.300967   74025 cni.go:84] Creating CNI manager for "calico"
	I0920 22:43:04.300992   74025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:43:04.301020   74025 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.75 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-833441 NodeName:calico-833441 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.75"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.75 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:43:04.301178   74025 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.75
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-833441"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.75
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.75"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
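The YAML above is the kubeadm configuration minikube renders for the calico-833441 control plane; it is written to /var/tmp/minikube/kubeadm.yaml.new (2154 bytes, see the scp below) and copied to /var/tmp/minikube/kubeadm.yaml before init. A sketch for sanity-checking the rendered file on the node without changing anything, assuming the kubeadm binary from the versioned minikube path used later in this log:

	# Nothing is persisted with --dry-run; manifests are rendered to a temporary directory for inspection
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run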
	
	I0920 22:43:04.301246   74025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:43:04.314945   74025 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:43:04.315043   74025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:43:04.328002   74025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 22:43:04.346331   74025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:43:04.363791   74025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0920 22:43:04.380642   74025 ssh_runner.go:195] Run: grep 192.168.61.75	control-plane.minikube.internal$ /etc/hosts
	I0920 22:43:04.384749   74025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.75	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:43:04.397697   74025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:43:04.510902   74025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:43:04.527737   74025 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441 for IP: 192.168.61.75
	I0920 22:43:04.527763   74025 certs.go:194] generating shared ca certs ...
	I0920 22:43:04.527783   74025 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:04.527962   74025 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:43:04.528028   74025 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:43:04.528041   74025 certs.go:256] generating profile certs ...
	I0920 22:43:04.528131   74025 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.key
	I0920 22:43:04.528151   74025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.crt with IP's: []
	I0920 22:43:04.756041   74025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.crt ...
	I0920 22:43:04.756092   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.crt: {Name:mk8b754e73ca5dd1a1ffc39f379c8c45c4abff77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:04.756314   74025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.key ...
	I0920 22:43:04.756337   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/client.key: {Name:mk9f2d2bc3e3d5bc5ba041eca018b54854954a3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:04.756473   74025 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key.196fb5d3
	I0920 22:43:04.756497   74025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt.196fb5d3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.75]
	I0920 22:43:05.322009   74025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt.196fb5d3 ...
	I0920 22:43:05.322046   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt.196fb5d3: {Name:mkf0aa5ae638f6a5769945822a4c64c24e05b2ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:05.322240   74025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key.196fb5d3 ...
	I0920 22:43:05.322258   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key.196fb5d3: {Name:mk61c15acf989fd2cd08e6c9ceb562f53259ab42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:05.322359   74025 certs.go:381] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt.196fb5d3 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt
	I0920 22:43:05.322436   74025 certs.go:385] copying /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key.196fb5d3 -> /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key
	I0920 22:43:05.322487   74025 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.key
	I0920 22:43:05.322501   74025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.crt with IP's: []
	I0920 22:43:05.713372   74025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.crt ...
	I0920 22:43:05.713404   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.crt: {Name:mkc646acc820fdf2a8c9351a29042340a5d0a1fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:05.713589   74025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.key ...
	I0920 22:43:05.713603   74025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.key: {Name:mkb7b941a4d77d16b6fcebc217508c72690ac84a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:05.713812   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:43:05.713848   74025 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:43:05.713858   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:43:05.713877   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:43:05.713898   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:43:05.713919   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:43:05.713963   74025 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:43:05.714624   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:43:05.747860   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:43:05.776179   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:43:05.802901   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:43:05.828343   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 22:43:05.854072   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:43:05.880760   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:43:05.909833   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/calico-833441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:43:05.939678   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:43:05.968711   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:43:05.995133   74025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:43:06.023782   74025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:43:06.042296   74025 ssh_runner.go:195] Run: openssl version
	I0920 22:43:06.048970   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:43:06.061703   74025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:43:06.067868   74025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:43:06.067936   74025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:43:06.076126   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:43:06.088002   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:43:06.100524   74025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:43:06.105617   74025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:43:06.105684   74025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:43:06.112088   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:43:06.128361   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:43:06.146317   74025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:43:06.153792   74025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:43:06.153853   74025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:43:06.162882   74025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
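The openssl/ln sequence above installs each CA into the node's system trust store: the PEM is linked under /etc/ssl/certs, its OpenSSL subject hash is computed, and a <hash>.0 symlink is created so OpenSSL-based clients can find the CA by hash lookup. A condensed sketch of the same steps for the minikubeCA certificate, using the paths and the b5213941 hash visible in the log:

	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"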
	I0920 22:43:06.180936   74025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:43:06.189555   74025 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 22:43:06.189630   74025 kubeadm.go:392] StartCluster: {Name:calico-833441 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:calico-833441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.61.75 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:43:06.189741   74025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:43:06.189802   74025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:43:06.240923   74025 cri.go:89] found id: ""
	I0920 22:43:06.240985   74025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:43:06.251496   74025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:43:06.261929   74025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:43:06.272283   74025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:43:06.272305   74025 kubeadm.go:157] found existing configuration files:
	
	I0920 22:43:06.272360   74025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:43:06.284095   74025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:43:06.284171   74025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:43:06.294565   74025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:43:06.305868   74025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:43:06.305941   74025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:43:06.317837   74025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:43:06.328307   74025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:43:06.328380   74025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:43:06.338899   74025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:43:06.349124   74025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:43:06.349194   74025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:43:06.360565   74025 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:43:06.420413   74025 kubeadm.go:310] W0920 22:43:06.404500     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:43:06.422082   74025 kubeadm.go:310] W0920 22:43:06.406427     845 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:43:06.539670   74025 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
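Both kubeadm warnings above name their own remediation: migrate the config off the deprecated kubeadm.k8s.io/v1beta3 API, and enable the kubelet unit so it starts on boot. Roughly (the output filename for the migrated config is arbitrary):

	kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml
	sudo systemctl enable kubelet.service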
	I0920 22:43:02.456863   77059 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:43:02.456921   77059 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 22:43:02.456933   77059 cache.go:56] Caching tarball of preloaded images
	I0920 22:43:02.457047   77059 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:43:02.457061   77059 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 22:43:02.457191   77059 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/enable-default-cni-833441/config.json ...
	I0920 22:43:02.457222   77059 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/enable-default-cni-833441/config.json: {Name:mkf9916d5bd3b6db593ef91bafc89e0473dd6479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:43:02.457410   77059 start.go:360] acquireMachinesLock for enable-default-cni-833441: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	
	
	==> CRI-O <==
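The entries below are CRI-O's own debug trace of CRI gRPC requests and responses on the default-k8s-diff-port-799783 node, captured while pod sandboxes and containers were being listed. A sketch of querying the same state interactively; crictl pods and crictl ps -a issue the ListPodSandbox and ListContainers calls seen in the trace:

	minikube ssh -p default-k8s-diff-port-799783 -- sudo crictl pods
	minikube ssh -p default-k8s-diff-port-799783 -- sudo crictl ps -a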
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.399190718Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-lgvzf,Uid:58852f70-36e8-4972-8a6f-b04d8fc9d9f3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871122489845018,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.674551018Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-dpwmm,Uid:6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,Namesp
ace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871122454528986,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.647689586Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6f5c8e51a2ef68a2746ae161584db4924926f77b12726e11af1a34d1684a163d,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-pjcvw,Uid:0b1b708c-664e-4d6c-8da2-79941793d8d9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121989796644,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-pjcvw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b1b708c-664e-4d6c-8da2-79941793d8d9,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Ann
otations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:21.680316847Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6149617c-dd2c-4ac4-ad1b-80f4943d966f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121860985808,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"con
tainers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-09-20T22:25:21.554463368Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&PodSandboxMetadata{Name:kube-proxy-h7j48,Uid:0fe6b074-a959-4f74-a981-e2303126c3ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871121050604737,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-09-20T22:25:20.134910699Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-799783,Uid:791c5d07cde00f129109d7f097fa6c8d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109838025280,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 791c5d07cde00f129109d7f097fa6c8d,kubernetes.io/config.seen: 2024-09-20T22:25:09.386361520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&PodSandb
oxMetadata{Name:etcd-default-k8s-diff-port-799783,Uid:e058f700d55f306a119fa5fc03f0b47b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109836426013,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.94:2379,kubernetes.io/config.hash: e058f700d55f306a119fa5fc03f0b47b,kubernetes.io/config.seen: 2024-09-20T22:25:09.386355520Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-799783,Uid:1cdc3434602c81ab716edc86ab6b373f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1726871109830054009,Labels:map[string]string{component
: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1cdc3434602c81ab716edc86ab6b373f,kubernetes.io/config.seen: 2024-09-20T22:25:09.386360643Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-799783,Uid:ebbf246f93c95e2a5453c35b524e0042,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1726871109828715501,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,tier: control-plane,},Annotations:map
[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.94:8444,kubernetes.io/config.hash: ebbf246f93c95e2a5453c35b524e0042,kubernetes.io/config.seen: 2024-09-20T22:25:09.386359343Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-799783,Uid:ebbf246f93c95e2a5453c35b524e0042,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1726870823921111606,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.94:8444,kubernetes.io/config.hash: ebbf246f93c95e2a5453c35b524e0042,kubernetes.io/config.seen
: 2024-09-20T22:20:23.352723776Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=9233febe-abda-4410-b57e-a17ef8d1461b name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.400215392Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd804574-a182-495a-9efa-c7cd7edfb6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.400393612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd804574-a182-495a-9efa-c7cd7edfb6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.403881862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fd804574-a182-495a-9efa-c7cd7edfb6dd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.421461115Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e52cfd5f-d5ed-4a4c-bdb8-f8894e77e350 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.421539642Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e52cfd5f-d5ed-4a4c-bdb8-f8894e77e350 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.422584801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1b0891a2-a9e7-41ac-aa42-550bdf1fb947 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.422969520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872188422948088,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b0891a2-a9e7-41ac-aa42-550bdf1fb947 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.423585126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=96564d19-adce-4334-b6ad-1ae3630fdfa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.423637230Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=96564d19-adce-4334-b6ad-1ae3630fdfa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.423835462Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96564d19-adce-4334-b6ad-1ae3630fdfa3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.473200661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3185dc36-2f8b-4a4a-92a9-476b68cce994 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.473319226Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3185dc36-2f8b-4a4a-92a9-476b68cce994 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.474582996Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25450c2d-2748-4366-8fa9-b5356b350363 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.474978299Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872188474955037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25450c2d-2748-4366-8fa9-b5356b350363 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.475727180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8b6a2650-4efd-42a3-9e38-e5c6f43d00bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.475828819Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8b6a2650-4efd-42a3-9e38-e5c6f43d00bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.476124117Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8b6a2650-4efd-42a3-9e38-e5c6f43d00bd name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.513840935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e5cd4059-ef6e-40df-a393-8caa72b30e03 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.514078386Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e5cd4059-ef6e-40df-a393-8caa72b30e03 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.516328130Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fb421e25-a126-4a19-8662-f37ef62734bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.516730653Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872188516707040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fb421e25-a126-4a19-8662-f37ef62734bd name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.517332741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f5f40ac-773c-4e8f-9d3e-458b796cab90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.517403229Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f5f40ac-773c-4e8f-9d3e-458b796cab90 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:43:08 default-k8s-diff-port-799783 crio[718]: time="2024-09-20 22:43:08.517621834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e,PodSandboxId:e3eff4d2c7e833ce99811d19863712dc9218fa5c7da4df113915eac70a30ff0d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122783209017,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-lgvzf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58852f70-36e8-4972-8a6f-b04d8fc9d9f3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f,PodSandboxId:89b8eb6393180386d1068612123c3527aff06837e994a95acac08e206875d436,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871122744407520,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-dpwmm,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: 6bd8b8dc-b176-4d2a-affc-63e02eb9fba3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9,PodSandboxId:33c77c1e886c9418d62fd884d2087bdab7773df0cb69188a588f1e7b924b119f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,St
ate:CONTAINER_RUNNING,CreatedAt:1726871121973030444,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6149617c-dd2c-4ac4-ad1b-80f4943d966f,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29,PodSandboxId:c9eca2c35d9a24badbe9e21a37be61baaaf51002c2e16c94c7f3312135abee40,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING
,CreatedAt:1726871121226913532,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7j48,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe6b074-a959-4f74-a981-e2303126c3ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8,PodSandboxId:6cbd7d5eb20b847c4f1bd9ab3ba3d56ce8c6966ff7be4bb52ef63a7338c53fb3,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871110057623483,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e058f700d55f306a119fa5fc03f0b47b,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3,PodSandboxId:0ebb4eae3e66091366c1a2c7919095a9c11ca345d076250459751ba7d04ed968,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871110072516812,Labels:map[string]string{io.ku
bernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554,PodSandboxId:21f494ed38130dd0a2f327b2bb9b00d922c1d889149bc6f0756b9cea5b13fca5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871110052217362,Labels:map[string]strin
g{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cdc3434602c81ab716edc86ab6b373f,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5,PodSandboxId:b57a3588282b93b88f8b9d3df3776c08df62a86d4f582af445427c6bb37de918,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871110000450286,Labels:
map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 791c5d07cde00f129109d7f097fa6c8d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3,PodSandboxId:ba465b6942cf4804f0c0d206188e77fd77baabf99d6cb17dcdbb0d06649157c0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870824133680588,Labels:map
[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-799783,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebbf246f93c95e2a5453c35b524e0042,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f5f40ac-773c-4e8f-9d3e-458b796cab90 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	52423b7dc42cb       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   e3eff4d2c7e83       coredns-7c65d6cfc9-lgvzf
	5dde9daf56a3d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   17 minutes ago      Running             coredns                   0                   89b8eb6393180       coredns-7c65d6cfc9-dpwmm
	aad9d02a56da3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   17 minutes ago      Running             storage-provisioner       0                   33c77c1e886c9       storage-provisioner
	c7763d880d6ff       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   17 minutes ago      Running             kube-proxy                0                   c9eca2c35d9a2       kube-proxy-h7j48
	129346b80ec6c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   17 minutes ago      Running             kube-apiserver            2                   0ebb4eae3e660       kube-apiserver-default-k8s-diff-port-799783
	3dedf7017026a       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   17 minutes ago      Running             etcd                      2                   6cbd7d5eb20b8       etcd-default-k8s-diff-port-799783
	c8b75c462fb57       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   17 minutes ago      Running             kube-controller-manager   2                   21f494ed38130       kube-controller-manager-default-k8s-diff-port-799783
	1caaaa9c87352       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   17 minutes ago      Running             kube-scheduler            2                   b57a3588282b9       kube-scheduler-default-k8s-diff-port-799783
	f1f7491b6b7f7       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   22 minutes ago      Exited              kube-apiserver            1                   ba465b6942cf4       kube-apiserver-default-k8s-diff-port-799783
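
The table above is the node-local CRI view of the same containers that appear in the ListContainers debug responses earlier in this log. To reproduce it for this profile one would typically run crictl inside the minikube VM; a minimal sketch (assuming the profile name taken from the hostnames in this log, and that crictl needs sudo on this image):

    minikube -p default-k8s-diff-port-799783 ssh "sudo crictl ps -a"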
	
	
	==> coredns [52423b7dc42cb47e4e10213716656ff74ac116d4d366261d104f2b943e25563e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [5dde9daf56a3dc9754f580c4c18d7b55c4119f2233bf60e39b0ded71b7445a6f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-799783
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-799783
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=default-k8s-diff-port-799783
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:25:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-799783
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:40:43 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:40:43 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:40:43 +0000   Fri, 20 Sep 2024 22:25:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:40:43 +0000   Fri, 20 Sep 2024 22:25:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    default-k8s-diff-port-799783
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 73f2972cc5fc48a898fc672ebc64ecf5
	  System UUID:                73f2972c-c5fc-48a8-98fc-672ebc64ecf5
	  Boot ID:                    da1513e0-2a24-4ada-a28c-8fff832983b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dpwmm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 coredns-7c65d6cfc9-lgvzf                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-default-k8s-diff-port-799783                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kube-apiserver-default-k8s-diff-port-799783             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-799783    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-h7j48                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-default-k8s-diff-port-799783             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 metrics-server-6867b74b74-pjcvw                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x7 over 17m)  kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  17m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  17m                kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m                kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m                kubelet          Node default-k8s-diff-port-799783 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           17m                node-controller  Node default-k8s-diff-port-799783 event: Registered Node default-k8s-diff-port-799783 in Controller
	
	
	==> dmesg <==
	[  +0.051554] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042535] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep20 22:20] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.546359] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.674260] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000036] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.979151] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.057976] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.068349] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.247404] systemd-fstab-generator[668]: Ignoring "noauto" option for root device
	[  +0.136620] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[  +0.324904] systemd-fstab-generator[709]: Ignoring "noauto" option for root device
	[  +4.305890] systemd-fstab-generator[799]: Ignoring "noauto" option for root device
	[  +0.062195] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.410550] systemd-fstab-generator[918]: Ignoring "noauto" option for root device
	[  +5.613240] kauditd_printk_skb: 97 callbacks suppressed
	[  +5.766699] kauditd_printk_skb: 85 callbacks suppressed
	[Sep20 22:25] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.801126] systemd-fstab-generator[2545]: Ignoring "noauto" option for root device
	[  +4.465228] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.589528] systemd-fstab-generator[2872]: Ignoring "noauto" option for root device
	[  +5.501654] systemd-fstab-generator[3000]: Ignoring "noauto" option for root device
	[  +0.087052] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.124266] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [3dedf7017026a82096fb601bd58964a90661461f66edc0d9182c4c565be292a8] <==
	{"level":"info","ts":"2024-09-20T22:25:10.990463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f received MsgVoteResp from c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c23cd90330b5fc4f became leader at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.990485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c23cd90330b5fc4f elected leader c23cd90330b5fc4f at term 2"}
	{"level":"info","ts":"2024-09-20T22:25:10.995362Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:10.998487Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c23cd90330b5fc4f","local-member-attributes":"{Name:default-k8s-diff-port-799783 ClientURLs:[https://192.168.39.94:2379]}","request-path":"/0/members/c23cd90330b5fc4f/attributes","cluster-id":"f81fab91992620a9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:25:11.000308Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:25:11.000829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:25:11.001375Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f81fab91992620a9","local-member-id":"c23cd90330b5fc4f","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.001475Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.001517Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:25:11.002091Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:25:11.004975Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T22:25:11.009322Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:25:11.009359Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:25:11.009921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:25:11.014750Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.94:2379"}
	{"level":"info","ts":"2024-09-20T22:35:11.084119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2024-09-20T22:35:11.094759Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":724,"took":"9.757775ms","hash":2226393720,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2367488,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-09-20T22:35:11.094925Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2226393720,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T22:40:11.092443Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-09-20T22:40:11.097397Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":967,"took":"3.964492ms","hash":2312552015,"current-db-size-bytes":2367488,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-09-20T22:40:11.097501Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2312552015,"revision":967,"compact-revision":724}
	{"level":"info","ts":"2024-09-20T22:42:18.110384Z","caller":"traceutil/trace.go:171","msg":"trace[838945781] transaction","detail":"{read_only:false; response_revision:1317; number_of_response:1; }","duration":"130.616803ms","start":"2024-09-20T22:42:17.979730Z","end":"2024-09-20T22:42:18.110346Z","steps":["trace[838945781] 'process raft request'  (duration: 130.419659ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:43:05.586579Z","caller":"traceutil/trace.go:171","msg":"trace[1661452217] transaction","detail":"{read_only:false; response_revision:1354; number_of_response:1; }","duration":"169.076584ms","start":"2024-09-20T22:43:05.417470Z","end":"2024-09-20T22:43:05.586547Z","steps":["trace[1661452217] 'process raft request'  (duration: 168.524208ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:43:08 up 23 min,  0 users,  load average: 0.17, 0.07, 0.08
	Linux default-k8s-diff-port-799783 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [129346b80ec6cb39e996b231d3ceab514c2ce605eec152adbb0c64841a85fce3] <==
	I0920 22:38:13.800521       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:38:13.800551       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:40:12.798673       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:12.798797       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0920 22:40:13.800541       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:13.800619       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:40:13.800680       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:40:13.800792       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:40:13.801892       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:40:13.801954       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:41:13.802756       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 22:41:13.802799       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:41:13.803149       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 22:41:13.803203       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:41:13.805051       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:41:13.805114       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
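
The repeated 503 responses above mean the aggregated v1beta1.metrics.k8s.io APIService has no reachable backend, i.e. the metrics-server service behind it is not answering. A quick external check would look something like the following (assumptions: the kubectl context carries the profile name, as elsewhere in this report, and the metrics-server pods carry the conventional k8s-app=metrics-server label):

    kubectl --context default-k8s-diff-port-799783 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-799783 -n kube-system get pods -l k8s-app=metrics-server

An APIService reporting Available=False (for example with reason FailedDiscoveryCheck), or a metrics-server pod that is not Ready, would match the errors logged here.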
	
	
	==> kube-apiserver [f1f7491b6b7f7ebbbbbe0336334918ae99736d6384c7ca8034015683366c74a3] <==
	W0920 22:25:04.164860       1 logging.go:55] [core] [Channel #139 SubChannel #140]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.174588       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.196470       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.229194       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.259584       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.263191       1 logging.go:55] [core] [Channel #70 SubChannel #71]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.327823       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.353876       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.355431       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.405647       1 logging.go:55] [core] [Channel #160 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.434443       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.533937       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.545569       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.565102       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.568660       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.592796       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.596402       1 logging.go:55] [core] [Channel #64 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.662757       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.832952       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.871406       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:04.959558       1 logging.go:55] [core] [Channel #49 SubChannel #50]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.074077       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.105516       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.278845       1 logging.go:55] [core] [Channel #127 SubChannel #128]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:05.397348       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [c8b75c462fb57f6fca71bdbd88dc4e157f8f9b18582ae2acd1027d8c57a55554] <==
	E0920 22:37:49.897662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:37:50.381527       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:19.904181       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:20.389892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:49.911202       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:50.397958       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:19.917861       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:20.405950       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:49.924952       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:50.414835       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:40:19.932664       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:20.422967       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:40:43.849138       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-799783"
	E0920 22:40:49.939621       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:50.431539       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:41:16.327562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="342.625µs"
	E0920 22:41:19.950830       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:41:20.440758       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:41:30.323943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="98.798µs"
	E0920 22:41:49.957422       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:41:50.448385       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:42:19.965968       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:42:20.457526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:42:49.972602       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:42:50.474460       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [c7763d880d6ff2c443b413ce67006c4b428790b605c2701d6a0b4e56ad863a29] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:25:21.608877       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:25:21.623982       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.94"]
	E0920 22:25:21.624128       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:25:21.711978       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:25:21.712101       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:25:21.712195       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:25:21.726537       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:25:21.726890       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:25:21.726924       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:25:21.760544       1 config.go:199] "Starting service config controller"
	I0920 22:25:21.760799       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:25:21.760919       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:25:21.760982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:25:21.763644       1 config.go:328] "Starting node config controller"
	I0920 22:25:21.763743       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:25:21.862548       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:25:21.862610       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:25:21.864064       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1caaaa9c87352d4dd907694515d0359d7400a29dd5906a2818cf2701de42ffb5] <==
	W0920 22:25:12.827129       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 22:25:12.827157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:12.827216       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:12.827303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:12.827382       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:25:12.827411       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.723441       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:25:13.724193       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 22:25:13.735542       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:25:13.735607       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.745165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:25:13.745437       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.764543       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 22:25:13.764681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.782097       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:25:13.782413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.893464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 22:25:13.893584       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:13.970032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:13.970166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:14.059893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:14.060458       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:25:14.096233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 22:25:14.096413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 22:25:15.508557       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:41:57 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:41:57.313693    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:42:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:05.602166    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872125601694629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:05.602607    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872125601694629,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:09 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:09.307854    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:15.357520    2879 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:15.604133    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872135603800776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:15 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:15.604174    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872135603800776,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:23 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:23.312570    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:42:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:25.608621    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872145608126708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:25 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:25.608981    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872145608126708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:35 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:35.611108    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872155610675488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:35 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:35.611624    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872155610675488,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:36 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:36.306698    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:42:45 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:45.616430    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872165615929586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:45 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:45.616470    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872165615929586,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:51 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:51.308057    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:42:55 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:55.618833    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872175618443859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:42:55 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:42:55.619451    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872175618443859,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:43:04 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:43:04.307038    2879 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-pjcvw" podUID="0b1b708c-664e-4d6c-8da2-79941793d8d9"
	Sep 20 22:43:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:43:05.622404    2879 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872185621221012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:43:05 default-k8s-diff-port-799783 kubelet[2879]: E0920 22:43:05.622466    2879 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872185621221012,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [aad9d02a56da37f2e49c89945e9bf78fdfacce940fa2696eecd3766aea79dfc9] <==
	I0920 22:25:22.086992       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:25:22.099166       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:25:22.099208       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:25:22.120073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:25:22.122015       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f!
	I0920 22:25:22.126965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"65cc7b7b-451c-4264-9d63-260b4afb2ebe", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f became leader
	I0920 22:25:22.225321       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-799783_65ee3fb0-9d76-4dad-943e-714947da403f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-pjcvw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw: exit status 1 (71.571136ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-pjcvw" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-799783 describe pod metrics-server-6867b74b74-pjcvw: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (514.09s)
E0920 22:44:44.581028   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:52.213743   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:59.221969   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (334.57s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-032884 -n no-preload-032884
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-09-20 22:40:56.908593051 +0000 UTC m=+6797.039728019
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-032884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-032884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.533µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-032884 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-032884 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-032884 logs -n 25: (1.231069699s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC | 20 Sep 24 22:40 UTC |
	| start   | -p newest-cni-402676 --memory=2200 --alsologtostderr   | newest-cni-402676            | jenkins | v1.34.0 | 20 Sep 24 22:40 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:40:36
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:40:36.010571   71805 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:40:36.010795   71805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:40:36.010804   71805 out.go:358] Setting ErrFile to fd 2...
	I0920 22:40:36.010808   71805 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:40:36.010984   71805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:40:36.011563   71805 out.go:352] Setting JSON to false
	I0920 22:40:36.012496   71805 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":8584,"bootTime":1726863452,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:40:36.012590   71805 start.go:139] virtualization: kvm guest
	I0920 22:40:36.015031   71805 out.go:177] * [newest-cni-402676] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:40:36.016996   71805 notify.go:220] Checking for updates...
	I0920 22:40:36.017074   71805 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:40:36.018672   71805 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:40:36.020316   71805 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:40:36.021900   71805 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:40:36.023372   71805 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:40:36.024910   71805 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:40:36.026815   71805 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:36.026909   71805 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:36.027032   71805 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:40:36.027124   71805 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:40:36.068802   71805 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:40:36.070050   71805 start.go:297] selected driver: kvm2
	I0920 22:40:36.070069   71805 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:40:36.070088   71805 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:40:36.071123   71805 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:40:36.071213   71805 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:40:36.088697   71805 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:40:36.088756   71805 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0920 22:40:36.088808   71805 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0920 22:40:36.089038   71805 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0920 22:40:36.089073   71805 cni.go:84] Creating CNI manager for ""
	I0920 22:40:36.089128   71805 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:40:36.089144   71805 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 22:40:36.089199   71805 start.go:340] cluster config:
	{Name:newest-cni-402676 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-402676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:40:36.089330   71805 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:40:36.091047   71805 out.go:177] * Starting "newest-cni-402676" primary control-plane node in "newest-cni-402676" cluster
	I0920 22:40:36.092259   71805 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:40:36.092313   71805 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 22:40:36.092326   71805 cache.go:56] Caching tarball of preloaded images
	I0920 22:40:36.092405   71805 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:40:36.092419   71805 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0920 22:40:36.092527   71805 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/config.json ...
	I0920 22:40:36.092550   71805 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/newest-cni-402676/config.json: {Name:mk2632165b582b9eb46bacfc8fc8b8347d860541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:40:36.092720   71805 start.go:360] acquireMachinesLock for newest-cni-402676: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:40:36.092762   71805 start.go:364] duration metric: took 25.88µs to acquireMachinesLock for "newest-cni-402676"
	I0920 22:40:36.092786   71805 start.go:93] Provisioning new machine with config: &{Name:newest-cni-402676 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-402676 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:40:36.092866   71805 start.go:125] createHost starting for "" (driver="kvm2")
	I0920 22:40:36.094465   71805 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0920 22:40:36.094606   71805 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:40:36.094648   71805 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:40:36.109665   71805 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33233
	I0920 22:40:36.110155   71805 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:40:36.110740   71805 main.go:141] libmachine: Using API Version  1
	I0920 22:40:36.110763   71805 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:40:36.111136   71805 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:40:36.111366   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetMachineName
	I0920 22:40:36.111506   71805 main.go:141] libmachine: (newest-cni-402676) Calling .DriverName
	I0920 22:40:36.111660   71805 start.go:159] libmachine.API.Create for "newest-cni-402676" (driver="kvm2")
	I0920 22:40:36.111698   71805 client.go:168] LocalClient.Create starting
	I0920 22:40:36.111725   71805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem
	I0920 22:40:36.111760   71805 main.go:141] libmachine: Decoding PEM data...
	I0920 22:40:36.111774   71805 main.go:141] libmachine: Parsing certificate...
	I0920 22:40:36.111816   71805 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem
	I0920 22:40:36.111852   71805 main.go:141] libmachine: Decoding PEM data...
	I0920 22:40:36.111864   71805 main.go:141] libmachine: Parsing certificate...
	I0920 22:40:36.111878   71805 main.go:141] libmachine: Running pre-create checks...
	I0920 22:40:36.111886   71805 main.go:141] libmachine: (newest-cni-402676) Calling .PreCreateCheck
	I0920 22:40:36.112198   71805 main.go:141] libmachine: (newest-cni-402676) Calling .GetConfigRaw
	I0920 22:40:36.112584   71805 main.go:141] libmachine: Creating machine...
	I0920 22:40:36.112596   71805 main.go:141] libmachine: (newest-cni-402676) Calling .Create
	I0920 22:40:36.112726   71805 main.go:141] libmachine: (newest-cni-402676) Creating KVM machine...
	I0920 22:40:36.114022   71805 main.go:141] libmachine: (newest-cni-402676) DBG | found existing default KVM network
	I0920 22:40:36.115311   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.115163   71828 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:64:2c:33} reservation:<nil>}
	I0920 22:40:36.116034   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.115964   71828 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:06:9c:de} reservation:<nil>}
	I0920 22:40:36.117155   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.117082   71828 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000289210}
	I0920 22:40:36.117183   71805 main.go:141] libmachine: (newest-cni-402676) DBG | created network xml: 
	I0920 22:40:36.117193   71805 main.go:141] libmachine: (newest-cni-402676) DBG | <network>
	I0920 22:40:36.117205   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   <name>mk-newest-cni-402676</name>
	I0920 22:40:36.117215   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   <dns enable='no'/>
	I0920 22:40:36.117229   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   
	I0920 22:40:36.117242   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I0920 22:40:36.117253   71805 main.go:141] libmachine: (newest-cni-402676) DBG |     <dhcp>
	I0920 22:40:36.117264   71805 main.go:141] libmachine: (newest-cni-402676) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I0920 22:40:36.117273   71805 main.go:141] libmachine: (newest-cni-402676) DBG |     </dhcp>
	I0920 22:40:36.117281   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   </ip>
	I0920 22:40:36.117290   71805 main.go:141] libmachine: (newest-cni-402676) DBG |   
	I0920 22:40:36.117298   71805 main.go:141] libmachine: (newest-cni-402676) DBG | </network>
	I0920 22:40:36.117311   71805 main.go:141] libmachine: (newest-cni-402676) DBG | 
	I0920 22:40:36.123111   71805 main.go:141] libmachine: (newest-cni-402676) DBG | trying to create private KVM network mk-newest-cni-402676 192.168.61.0/24...
	I0920 22:40:36.196117   71805 main.go:141] libmachine: (newest-cni-402676) DBG | private KVM network mk-newest-cni-402676 192.168.61.0/24 created
	I0920 22:40:36.196173   71805 main.go:141] libmachine: (newest-cni-402676) Setting up store path in /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676 ...
	I0920 22:40:36.196197   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.196077   71828 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:40:36.196282   71805 main.go:141] libmachine: (newest-cni-402676) Building disk image from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 22:40:36.196318   71805 main.go:141] libmachine: (newest-cni-402676) Downloading /home/jenkins/minikube-integration/19672-9614/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso...
	I0920 22:40:36.450777   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.450627   71828 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/id_rsa...
	I0920 22:40:36.658017   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.657875   71828 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/newest-cni-402676.rawdisk...
	I0920 22:40:36.658044   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Writing magic tar header
	I0920 22:40:36.658057   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Writing SSH key tar header
	I0920 22:40:36.658064   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:36.658009   71828 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676 ...
	I0920 22:40:36.658149   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676
	I0920 22:40:36.658166   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube/machines
	I0920 22:40:36.658174   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676 (perms=drwx------)
	I0920 22:40:36.658185   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:40:36.658195   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19672-9614
	I0920 22:40:36.658203   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0920 22:40:36.658212   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home/jenkins
	I0920 22:40:36.658222   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube/machines (perms=drwxr-xr-x)
	I0920 22:40:36.658229   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614/.minikube (perms=drwxr-xr-x)
	I0920 22:40:36.658238   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Checking permissions on dir: /home
	I0920 22:40:36.658252   71805 main.go:141] libmachine: (newest-cni-402676) DBG | Skipping /home - not owner
	I0920 22:40:36.658261   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins/minikube-integration/19672-9614 (perms=drwxrwxr-x)
	I0920 22:40:36.658335   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0920 22:40:36.658361   71805 main.go:141] libmachine: (newest-cni-402676) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0920 22:40:36.658379   71805 main.go:141] libmachine: (newest-cni-402676) Creating domain...
	I0920 22:40:36.659786   71805 main.go:141] libmachine: (newest-cni-402676) define libvirt domain using xml: 
	I0920 22:40:36.659810   71805 main.go:141] libmachine: (newest-cni-402676) <domain type='kvm'>
	I0920 22:40:36.659821   71805 main.go:141] libmachine: (newest-cni-402676)   <name>newest-cni-402676</name>
	I0920 22:40:36.659829   71805 main.go:141] libmachine: (newest-cni-402676)   <memory unit='MiB'>2200</memory>
	I0920 22:40:36.659837   71805 main.go:141] libmachine: (newest-cni-402676)   <vcpu>2</vcpu>
	I0920 22:40:36.659844   71805 main.go:141] libmachine: (newest-cni-402676)   <features>
	I0920 22:40:36.659851   71805 main.go:141] libmachine: (newest-cni-402676)     <acpi/>
	I0920 22:40:36.659873   71805 main.go:141] libmachine: (newest-cni-402676)     <apic/>
	I0920 22:40:36.659921   71805 main.go:141] libmachine: (newest-cni-402676)     <pae/>
	I0920 22:40:36.659946   71805 main.go:141] libmachine: (newest-cni-402676)     
	I0920 22:40:36.659956   71805 main.go:141] libmachine: (newest-cni-402676)   </features>
	I0920 22:40:36.659961   71805 main.go:141] libmachine: (newest-cni-402676)   <cpu mode='host-passthrough'>
	I0920 22:40:36.659966   71805 main.go:141] libmachine: (newest-cni-402676)   
	I0920 22:40:36.659973   71805 main.go:141] libmachine: (newest-cni-402676)   </cpu>
	I0920 22:40:36.659978   71805 main.go:141] libmachine: (newest-cni-402676)   <os>
	I0920 22:40:36.659986   71805 main.go:141] libmachine: (newest-cni-402676)     <type>hvm</type>
	I0920 22:40:36.659995   71805 main.go:141] libmachine: (newest-cni-402676)     <boot dev='cdrom'/>
	I0920 22:40:36.660006   71805 main.go:141] libmachine: (newest-cni-402676)     <boot dev='hd'/>
	I0920 22:40:36.660014   71805 main.go:141] libmachine: (newest-cni-402676)     <bootmenu enable='no'/>
	I0920 22:40:36.660025   71805 main.go:141] libmachine: (newest-cni-402676)   </os>
	I0920 22:40:36.660036   71805 main.go:141] libmachine: (newest-cni-402676)   <devices>
	I0920 22:40:36.660046   71805 main.go:141] libmachine: (newest-cni-402676)     <disk type='file' device='cdrom'>
	I0920 22:40:36.660059   71805 main.go:141] libmachine: (newest-cni-402676)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/boot2docker.iso'/>
	I0920 22:40:36.660070   71805 main.go:141] libmachine: (newest-cni-402676)       <target dev='hdc' bus='scsi'/>
	I0920 22:40:36.660081   71805 main.go:141] libmachine: (newest-cni-402676)       <readonly/>
	I0920 22:40:36.660090   71805 main.go:141] libmachine: (newest-cni-402676)     </disk>
	I0920 22:40:36.660114   71805 main.go:141] libmachine: (newest-cni-402676)     <disk type='file' device='disk'>
	I0920 22:40:36.660138   71805 main.go:141] libmachine: (newest-cni-402676)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0920 22:40:36.660152   71805 main.go:141] libmachine: (newest-cni-402676)       <source file='/home/jenkins/minikube-integration/19672-9614/.minikube/machines/newest-cni-402676/newest-cni-402676.rawdisk'/>
	I0920 22:40:36.660162   71805 main.go:141] libmachine: (newest-cni-402676)       <target dev='hda' bus='virtio'/>
	I0920 22:40:36.660169   71805 main.go:141] libmachine: (newest-cni-402676)     </disk>
	I0920 22:40:36.660178   71805 main.go:141] libmachine: (newest-cni-402676)     <interface type='network'>
	I0920 22:40:36.660187   71805 main.go:141] libmachine: (newest-cni-402676)       <source network='mk-newest-cni-402676'/>
	I0920 22:40:36.660195   71805 main.go:141] libmachine: (newest-cni-402676)       <model type='virtio'/>
	I0920 22:40:36.660203   71805 main.go:141] libmachine: (newest-cni-402676)     </interface>
	I0920 22:40:36.660220   71805 main.go:141] libmachine: (newest-cni-402676)     <interface type='network'>
	I0920 22:40:36.660243   71805 main.go:141] libmachine: (newest-cni-402676)       <source network='default'/>
	I0920 22:40:36.660264   71805 main.go:141] libmachine: (newest-cni-402676)       <model type='virtio'/>
	I0920 22:40:36.660299   71805 main.go:141] libmachine: (newest-cni-402676)     </interface>
	I0920 22:40:36.660323   71805 main.go:141] libmachine: (newest-cni-402676)     <serial type='pty'>
	I0920 22:40:36.660346   71805 main.go:141] libmachine: (newest-cni-402676)       <target port='0'/>
	I0920 22:40:36.660375   71805 main.go:141] libmachine: (newest-cni-402676)     </serial>
	I0920 22:40:36.660387   71805 main.go:141] libmachine: (newest-cni-402676)     <console type='pty'>
	I0920 22:40:36.660398   71805 main.go:141] libmachine: (newest-cni-402676)       <target type='serial' port='0'/>
	I0920 22:40:36.660408   71805 main.go:141] libmachine: (newest-cni-402676)     </console>
	I0920 22:40:36.660418   71805 main.go:141] libmachine: (newest-cni-402676)     <rng model='virtio'>
	I0920 22:40:36.660428   71805 main.go:141] libmachine: (newest-cni-402676)       <backend model='random'>/dev/random</backend>
	I0920 22:40:36.660435   71805 main.go:141] libmachine: (newest-cni-402676)     </rng>
	I0920 22:40:36.660451   71805 main.go:141] libmachine: (newest-cni-402676)     
	I0920 22:40:36.660468   71805 main.go:141] libmachine: (newest-cni-402676)     
	I0920 22:40:36.660476   71805 main.go:141] libmachine: (newest-cni-402676)   </devices>
	I0920 22:40:36.660485   71805 main.go:141] libmachine: (newest-cni-402676) </domain>
	I0920 22:40:36.660499   71805 main.go:141] libmachine: (newest-cni-402676) 
	I0920 22:40:36.665001   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:70:55:e5 in network default
	I0920 22:40:36.665597   71805 main.go:141] libmachine: (newest-cni-402676) Ensuring networks are active...
	I0920 22:40:36.665617   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:36.666375   71805 main.go:141] libmachine: (newest-cni-402676) Ensuring network default is active
	I0920 22:40:36.666844   71805 main.go:141] libmachine: (newest-cni-402676) Ensuring network mk-newest-cni-402676 is active
	I0920 22:40:36.667400   71805 main.go:141] libmachine: (newest-cni-402676) Getting domain xml...
	I0920 22:40:36.668134   71805 main.go:141] libmachine: (newest-cni-402676) Creating domain...
	I0920 22:40:37.924467   71805 main.go:141] libmachine: (newest-cni-402676) Waiting to get IP...
	I0920 22:40:37.925172   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:37.925600   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:37.925625   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:37.925581   71828 retry.go:31] will retry after 220.746783ms: waiting for machine to come up
	I0920 22:40:38.148192   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:38.148702   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:38.148732   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:38.148666   71828 retry.go:31] will retry after 305.688246ms: waiting for machine to come up
	I0920 22:40:38.456437   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:38.456958   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:38.456990   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:38.456921   71828 retry.go:31] will retry after 298.223359ms: waiting for machine to come up
	I0920 22:40:38.757284   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:38.757890   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:38.757918   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:38.757819   71828 retry.go:31] will retry after 544.839758ms: waiting for machine to come up
	I0920 22:40:39.304479   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:39.305020   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:39.305046   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:39.304971   71828 retry.go:31] will retry after 603.173435ms: waiting for machine to come up
	I0920 22:40:39.909771   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:39.910143   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:39.910160   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:39.910103   71828 retry.go:31] will retry after 647.974696ms: waiting for machine to come up
	I0920 22:40:40.560275   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:40.560810   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:40.560838   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:40.560738   71828 retry.go:31] will retry after 1.052311003s: waiting for machine to come up
	I0920 22:40:41.614165   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:41.614615   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:41.614644   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:41.614569   71828 retry.go:31] will retry after 1.135731993s: waiting for machine to come up
	I0920 22:40:42.751779   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:42.752265   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:42.752293   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:42.752219   71828 retry.go:31] will retry after 1.321474723s: waiting for machine to come up
	I0920 22:40:44.075848   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:44.076350   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:44.076377   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:44.076319   71828 retry.go:31] will retry after 2.284962937s: waiting for machine to come up
	I0920 22:40:46.363391   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:46.363931   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:46.363965   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:46.363877   71828 retry.go:31] will retry after 2.583613435s: waiting for machine to come up
	I0920 22:40:48.949276   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:48.949890   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:48.949930   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:48.949829   71828 retry.go:31] will retry after 2.41136753s: waiting for machine to come up
	I0920 22:40:51.363292   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:51.363877   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:51.363905   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:51.363829   71828 retry.go:31] will retry after 4.293526534s: waiting for machine to come up
	I0920 22:40:55.661552   71805 main.go:141] libmachine: (newest-cni-402676) DBG | domain newest-cni-402676 has defined MAC address 52:54:00:45:ed:7a in network mk-newest-cni-402676
	I0920 22:40:55.662023   71805 main.go:141] libmachine: (newest-cni-402676) DBG | unable to find current IP address of domain newest-cni-402676 in network mk-newest-cni-402676
	I0920 22:40:55.662051   71805 main.go:141] libmachine: (newest-cni-402676) DBG | I0920 22:40:55.661983   71828 retry.go:31] will retry after 3.984696326s: waiting for machine to come up
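	The retry.go:31 lines above show the start log polling libvirt for the new VM's DHCP lease and waiting progressively longer between attempts. A minimal Go sketch of that wait-for-IP pattern follows; the names lookupIP and waitForIP are hypothetical stand-ins for illustration, not minikube's actual helpers.

	// Sketch only: poll for the machine's IP, backing off between attempts until a deadline.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP stands in for the libvirt DHCP-lease query (hypothetical).
	func lookupIP() (string, error) {
		return "", errors.New("machine has no IP yet")
	}

	func waitForIP(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
			time.Sleep(backoff)
			backoff *= 2 // grow the wait between polls, roughly like the log above
			if backoff > 5*time.Second {
				backoff = 5 * time.Second
			}
		}
		return "", fmt.Errorf("timed out after %v waiting for machine IP", timeout)
	}

	func main() {
		if _, err := waitForIP(2 * time.Second); err != nil {
			fmt.Println(err)
		}
	}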
	
	
	==> CRI-O <==
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.489881002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872057489858584,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5359a1a3-1fc9-49a4-8552-3e2bdf59579d name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.490483269Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af282b31-51b5-4d13-9ef9-c2ce3ef45bf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.490552985Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af282b31-51b5-4d13-9ef9-c2ce3ef45bf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.490764308Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af282b31-51b5-4d13-9ef9-c2ce3ef45bf3 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.529713975Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36e1ce64-39bb-46ec-976e-b91cf00b027a name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.529808736Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36e1ce64-39bb-46ec-976e-b91cf00b027a name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.530722749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1f487c87-36c0-4677-b100-98045ac28443 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.531053582Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872057531032212,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1f487c87-36c0-4677-b100-98045ac28443 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.531559249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c603ddcf-0015-46a6-bd60-dbc36745d830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.531634834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c603ddcf-0015-46a6-bd60-dbc36745d830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.531823767Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c603ddcf-0015-46a6-bd60-dbc36745d830 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.573239667Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea86203b-b55b-4b64-bdff-7650e79093d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.573355582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea86203b-b55b-4b64-bdff-7650e79093d2 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.575477705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=df50901e-5492-4982-aee5-d68bdbfe27d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.575964780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872057575933151,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=df50901e-5492-4982-aee5-d68bdbfe27d3 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.576700013Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=421313e9-e0fa-494f-ac25-46195e6a4c7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.576771473Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=421313e9-e0fa-494f-ac25-46195e6a4c7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.577025079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=421313e9-e0fa-494f-ac25-46195e6a4c7c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.616522660Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fa945682-3a4a-47e2-b3cd-c51f5f6db8e6 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.616644705Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fa945682-3a4a-47e2-b3cd-c51f5f6db8e6 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.617787525Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c659848-5111-4338-b207-7d7598663dec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.618267300Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872057618237305,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c659848-5111-4338-b207-7d7598663dec name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.618904460Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6742396c-01f1-4cfb-b13a-e34166a36b33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.618995864Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6742396c-01f1-4cfb-b13a-e34166a36b33 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:57 no-preload-032884 crio[715]: time="2024-09-20 22:40:57.619267382Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35,PodSandboxId:23e16e9df66296c5ae7597b34eb9b053d27ae5856f2393f994887156a465c468,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1726871172876795101,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18366542-5571-4ff9-885b-f3aaa3106905,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74,PodSandboxId:076c46c9214ea860a4c38d5c69e14c60553a3078ca209b15c8bcae0ce6d1ff5a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172746982410,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fbxtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30e7b041-7437-4dec-bea2-4512c1c233d8,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc,PodSandboxId:ed2cc0a35d069564c778184700504dcff002508f09a395a0ae4db84ae07729fc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1726871172601213428,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-jsx2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b
4bdbac-ba8e-434b-80b7-6ade0c7cc363,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a,PodSandboxId:b59c5341b5b8cbc4ce9792234a8988cc571e499d822768387c0196ef1b8c4c3c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1726871171889303562,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fdc7z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a28610e-aa20-4aed-a9a4-1e125f03445b,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f,PodSandboxId:a7c1809b0c827b3d05c34efdabccb9363b2921aee7ae8d60594341cf9b4ebe62,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1726871160743753850,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0,PodSandboxId:46af1c368db0a142500f5418126fd5f292375d7fbdca44114d0e09548d86625c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1726871160769537466,Labels:map[string]string{io.kube
rnetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a5cc47b50e627199a483d7c3cabbda6,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2,PodSandboxId:57252c167dbdb5e366725cea136ecc9215ed4b52e2fc6162b9e726dc2d404486,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1726871160749941975,Labels:map[string]string{io.kubernetes.container.name: k
ube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12455009d1a25bff0d3404c7d1145b3,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2,PodSandboxId:cb01b1f1caf4d0247f5a2c3f8fc8fdfcd8c64f4bbeee3b519d40408937c7ec03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1726871160728793630,Labels:map[string]string{io.kubernetes.container.na
me: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b660d87ee7f20af62b42753d807cfded,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84,PodSandboxId:225bc4fe6977e04c2f5dfa9f0b1e651ec15b6f1474dc99c89ff05e032d581f75,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1726870877906748866,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-032884,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8be240a8b5c399cf0d2955a268d46dd9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6742396c-01f1-4cfb-b13a-e34166a36b33 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9995e2121841e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   23e16e9df6629       storage-provisioner
	79aea64097712       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   076c46c9214ea       coredns-7c65d6cfc9-fbxtz
	6503e43d3b796       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   ed2cc0a35d069       coredns-7c65d6cfc9-jsx2x
	4f617df1f10f6       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   b59c5341b5b8c       kube-proxy-fdc7z
	a10d6a76354cf       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   46af1c368db0a       etcd-no-preload-032884
	bbf9164666f59       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   57252c167dbdb       kube-controller-manager-no-preload-032884
	1c9a8b2aa3d02       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            2                   a7c1809b0c827       kube-apiserver-no-preload-032884
	92c8d91a89410       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   cb01b1f1caf4d       kube-scheduler-no-preload-032884
	476bba60185cc       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   225bc4fe6977e       kube-apiserver-no-preload-032884
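
The table above is the CRI-level view of the node at the time of capture. If deeper inspection is needed, the same data can be pulled live from inside the VM; the commands below are a suggested follow-up (assuming the minikube profile name matches the node name shown above), not part of the captured output:

	  $ minikube -p no-preload-032884 ssh -- sudo crictl ps -a
	  $ minikube -p no-preload-032884 ssh -- sudo crictl logs 476bba60185cc   # the exited kube-apiserver attempt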
	
	
	==> coredns [6503e43d3b796dea31680a5898ee3a5e3d889896391a06ddd6f1e53929ceb2fc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [79aea640977127ddba993b9089d6c89888007862070aac3e40529aa958d2dc74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-032884
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-032884
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=no-preload-032884
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-032884
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:40:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:36:27 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:36:27 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:36:27 +0000   Fri, 20 Sep 2024 22:26:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:36:27 +0000   Fri, 20 Sep 2024 22:26:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.19
	  Hostname:    no-preload-032884
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5beecf3b2736436692806fb1417eaac1
	  System UUID:                5beecf3b-2736-4366-9280-6fb1417eaac1
	  Boot ID:                    b13194a4-96d9-4ea9-9c38-cdd85010f043
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fbxtz                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-jsx2x                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-032884                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-032884             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-032884    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fdc7z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-032884             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-xvbg9              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-032884 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-032884 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-032884 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-032884 event: Registered Node no-preload-032884 in Controller
	
	
	==> dmesg <==
	[  +0.058598] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042352] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.410916] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.680604] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.607542] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000066] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.046762] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.055364] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.054549] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +0.181850] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.148133] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +0.295182] systemd-fstab-generator[707]: Ignoring "noauto" option for root device
	[Sep20 22:21] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.061425] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.714519] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +4.597016] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.169525] kauditd_printk_skb: 85 callbacks suppressed
	[Sep20 22:25] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.330505] systemd-fstab-generator[3010]: Ignoring "noauto" option for root device
	[Sep20 22:26] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.839127] systemd-fstab-generator[3331]: Ignoring "noauto" option for root device
	[  +4.906543] systemd-fstab-generator[3455]: Ignoring "noauto" option for root device
	[  +0.143676] kauditd_printk_skb: 14 callbacks suppressed
	[  +6.165920] kauditd_printk_skb: 84 callbacks suppressed
	
	
	==> etcd [a10d6a76354cf2eed8f773b8fecfa0d4f6118a696e9c40dc38829ec0efba2ea0] <==
	{"level":"info","ts":"2024-09-20T22:26:01.265895Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.19:2380"}
	{"level":"info","ts":"2024-09-20T22:26:01.265925Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.19:2380"}
	{"level":"info","ts":"2024-09-20T22:26:01.302514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 received MsgPreVoteResp from 16426d17e57aa155 at term 1"}
	{"level":"info","ts":"2024-09-20T22:26:01.302659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302664Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 received MsgVoteResp from 16426d17e57aa155 at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302682Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"16426d17e57aa155 became leader at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.302689Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 16426d17e57aa155 elected leader 16426d17e57aa155 at term 2"}
	{"level":"info","ts":"2024-09-20T22:26:01.306741Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.306985Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"16426d17e57aa155","local-member-attributes":"{Name:no-preload-032884 ClientURLs:[https://192.168.72.19:2379]}","request-path":"/0/members/16426d17e57aa155/attributes","cluster-id":"932de0d4c307fd64","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T22:26:01.309456Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:26:01.309495Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"932de0d4c307fd64","local-member-id":"16426d17e57aa155","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309592Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309637Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T22:26:01.309650Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T22:26:01.312673Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:26:01.313386Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.19:2379"}
	{"level":"info","ts":"2024-09-20T22:26:01.319491Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T22:26:01.319528Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T22:26:01.320097Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T22:26:01.324826Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T22:36:02.077150Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":683}
	{"level":"info","ts":"2024-09-20T22:36:02.088008Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":683,"took":"9.679984ms","hash":1808597588,"current-db-size-bytes":2183168,"current-db-size":"2.2 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2024-09-20T22:36:02.088124Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1808597588,"revision":683,"compact-revision":-1}
	
	
	==> kernel <==
	 22:40:57 up 20 min,  0 users,  load average: 0.25, 0.20, 0.15
	Linux no-preload-032884 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1c9a8b2aa3d02b7d84442c2edbb6711e5b14529c5b9bf8fc1ce68c998a2a0a4f] <==
	E0920 22:36:04.644053       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0920 22:36:04.644155       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:36:04.645323       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:36:04.645391       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:37:04.645623       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:37:04.645704       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W0920 22:37:04.645640       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:37:04.645792       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0920 22:37:04.647046       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:37:04.647133       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0920 22:39:04.647376       1 handler_proxy.go:99] no RequestInfo found in the context
	W0920 22:39:04.647372       1 handler_proxy.go:99] no RequestInfo found in the context
	E0920 22:39:04.647676       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0920 22:39:04.647724       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0920 22:39:04.648923       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 22:39:04.648953       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
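
The recurring 503s above indicate the aggregated v1beta1.metrics.k8s.io API never became reachable, i.e. the metrics-server backend behind the APIService was not serving. A quick way to confirm which side is failing, assuming the kubectl context carries the profile name and reusing the pod name from the node description above, would be:

	  $ kubectl --context no-preload-032884 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context no-preload-032884 -n kube-system describe pod metrics-server-6867b74b74-xvbg9
	  $ kubectl --context no-preload-032884 -n kube-system logs metrics-server-6867b74b74-xvbg9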
	
	
	==> kube-apiserver [476bba60185cc902d43166c05c4527e3fc81f14290b9da513ccfdd2d34439d84] <==
	W0920 22:25:56.864274       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:56.895109       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:56.909114       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.110274       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.118183       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.146622       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.187836       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.203782       1 logging.go:55] [core] [Channel #40 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.214314       1 logging.go:55] [core] [Channel #91 SubChannel #92]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.222632       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.328065       1 logging.go:55] [core] [Channel #151 SubChannel #152]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.369285       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.370653       1 logging.go:55] [core] [Channel #67 SubChannel #68]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.384293       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.388790       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.466901       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.487873       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.519166       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.549141       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.550612       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.653600       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.676298       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.688343       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.697844       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0920 22:25:57.750374       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [bbf9164666f590f04b50ff26c1658fb1eed6093ea79abd571d519a5bc26ff9c2] <==
	E0920 22:35:40.594980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:35:41.168378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:36:10.602127       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:36:11.177745       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:36:27.352997       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-032884"
	E0920 22:36:40.611825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:36:41.185782       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:37:10.619636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:37:11.194712       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0920 22:37:15.275322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="185.852µs"
	I0920 22:37:28.273729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="57.776µs"
	E0920 22:37:40.626641       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:37:41.204648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:10.633842       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:11.213560       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:38:40.640250       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:38:41.221813       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:10.649183       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:11.230108       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:39:40.655659       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:39:41.238812       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:40:10.664673       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:11.246830       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0920 22:40:40.671948       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0920 22:40:41.257145       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [4f617df1f10f6841905181690e6328f1da61d35011100ece9c68f4f479beaa4a] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0920 22:26:12.896033       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0920 22:26:12.997043       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.19"]
	E0920 22:26:13.013316       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:26:13.218623       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0920 22:26:13.218671       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0920 22:26:13.218698       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:26:13.221531       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:26:13.222375       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:26:13.222503       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:26:13.224037       1 config.go:199] "Starting service config controller"
	I0920 22:26:13.224249       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:26:13.224356       1 config.go:328] "Starting node config controller"
	I0920 22:26:13.224378       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:26:13.224626       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:26:13.224654       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:26:13.324874       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 22:26:13.324961       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:26:13.324942       1 shared_informer.go:320] Caches are synced for node config
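Both nftables errors above come from kube-proxy's start-up cleanup of leftover nftables rules; the nft calls fail with "Operation not supported" on this guest kernel, and the proxy then runs in its configured iptables mode, as the lines that follow show. A minimal sketch of probing the same limitation by hand, assuming shell access through the minikube binary used in this report and that an nft binary is present in the guest (the "probe" table is a throwaway created only for the check):

    $ out/minikube-linux-amd64 ssh -p no-preload-032884
    # inside the guest:
    $ lsmod | grep nf_tables || echo "nf_tables module not loaded"
    $ sudo nft add table ip probe && sudo nft delete table ip probe \
        || echo "nftables unavailable; kube-proxy can only manage iptables rules here"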
	
	
	==> kube-scheduler [92c8d91a89410b46ef61f2674cbc417df4d854deb17113a89c6ac3fbc9c0e8f2] <==
	W0920 22:26:03.710711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:03.710720       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:03.710900       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:26:03.710934       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 22:26:04.525247       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 22:26:04.525406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.540500       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:26:04.540861       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.613244       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 22:26:04.613280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.652356       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 22:26:04.654559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.828020       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:04.828197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.867207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:26:04.867664       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.918211       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 22:26:04.918320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.942806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 22:26:04.942862       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:04.962155       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:26:04.962254       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:26:05.170322       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:26:05.170491       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 22:26:07.392657       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:39:54 no-preload-032884 kubelet[3338]: E0920 22:39:54.258192    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:39:56 no-preload-032884 kubelet[3338]: E0920 22:39:56.512159    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871996511751131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:39:56 no-preload-032884 kubelet[3338]: E0920 22:39:56.512203    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726871996511751131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:05 no-preload-032884 kubelet[3338]: E0920 22:40:05.257716    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]: E0920 22:40:06.285292    3338 iptables.go:577] "Could not set up iptables canary" err=<
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]: E0920 22:40:06.515544    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872006514241663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:06 no-preload-032884 kubelet[3338]: E0920 22:40:06.515583    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872006514241663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:16 no-preload-032884 kubelet[3338]: E0920 22:40:16.517079    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872016516797707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:16 no-preload-032884 kubelet[3338]: E0920 22:40:16.517116    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872016516797707,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:17 no-preload-032884 kubelet[3338]: E0920 22:40:17.257227    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:40:26 no-preload-032884 kubelet[3338]: E0920 22:40:26.519476    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872026518814234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:26 no-preload-032884 kubelet[3338]: E0920 22:40:26.519518    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872026518814234,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:28 no-preload-032884 kubelet[3338]: E0920 22:40:28.257594    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:40:36 no-preload-032884 kubelet[3338]: E0920 22:40:36.522553    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872036522167386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:36 no-preload-032884 kubelet[3338]: E0920 22:40:36.522919    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872036522167386,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:39 no-preload-032884 kubelet[3338]: E0920 22:40:39.257048    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:40:46 no-preload-032884 kubelet[3338]: E0920 22:40:46.525617    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872046524720206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:46 no-preload-032884 kubelet[3338]: E0920 22:40:46.525884    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872046524720206,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:52 no-preload-032884 kubelet[3338]: E0920 22:40:52.257153    3338 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xvbg9" podUID="33c2a642-1336-4e1c-98cf-a1941710c39f"
	Sep 20 22:40:56 no-preload-032884 kubelet[3338]: E0920 22:40:56.530917    3338 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872056530206391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 20 22:40:56 no-preload-032884 kubelet[3338]: E0920 22:40:56.531009    3338 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872056530206391,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
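The kubelet entries above show the metrics-server pod permanently in ImagePullBackOff against the unresolvable fake.domain registry, which is also what keeps the metrics API stale for the controller-manager. A hedged sketch of inspecting that pod directly (the pod name and context are copied from the log above and will differ between runs):

    $ kubectl --context no-preload-032884 -n kube-system get pod metrics-server-6867b74b74-xvbg9 \
        -o jsonpath='{.spec.containers[0].image}{"\n"}'
    $ kubectl --context no-preload-032884 -n kube-system get events \
        --sort-by=.lastTimestamp --field-selector involvedObject.name=metrics-server-6867b74b74-xvbg9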
	
	
	==> storage-provisioner [9995e2121841ea05690639d90ff5196cfa73946de17e659e8809a2a05a8a5c35] <==
	I0920 22:26:13.132117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:26:13.160833       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:26:13.161029       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:26:13.189563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:26:13.189856       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5!
	I0920 22:26:13.192953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a4794ef-8fd8-42cc-b44b-c424ab7bc583", APIVersion:"v1", ResourceVersion:"391", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5 became leader
	I0920 22:26:13.292827       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-032884_9597dab5-9886-46ef-92f6-f8d75deb15c5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-032884 -n no-preload-032884
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-032884 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xvbg9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9: exit status 1 (65.548825ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xvbg9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-032884 describe pod metrics-server-6867b74b74-xvbg9: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (334.57s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
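Every poll below is answered with "connection refused" from 192.168.61.2:8443, i.e. nothing is accepting connections on the apiserver endpoint of the old-k8s-version profile at this point. A minimal manual equivalent of the check the test keeps retrying; the kube context name for this profile does not appear in this excerpt, so <old-k8s-version-context> is a placeholder:

    $ kubectl --context <old-k8s-version-context> -n kubernetes-dashboard \
        get pods -l k8s-app=kubernetes-dashboard
    $ curl -sk https://192.168.61.2:8443/healthz || echo "apiserver not reachable yet"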
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
[the identical WARNING above repeats for every subsequent poll; each Get against https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods fails with "connect: connection refused"]
E0920 22:39:11.809840   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
(the warning above repeats verbatim 44 more times while the apiserver at 192.168.61.2:8443 refuses connections)
E0920 22:40:09.131755   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.2:8443: connect: connection refused
(the warning above repeats verbatim 21 more times until the 9m0s wait below expires)
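The warnings above come from a helper that repeatedly lists the dashboard pods by label selector; with the apiserver stopped, every attempt fails with "connection refused". A minimal client-go sketch of an equivalent query follows; the kubeconfig path and error handling are illustrative assumptions, not the helper's actual code.

// pod_list_sketch.go: list dashboard pods by label selector, mirroring the request the warnings show.
// Assumption: kubeconfig path is illustrative; the real helper builds its client differently.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-9614/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same request as the warnings: GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app=kubernetes-dashboard
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// While the apiserver is down this returns "connection refused", as logged above.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
}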
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (222.308843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-593412" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-593412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-593412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.89µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-593412 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
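The "failed to start within 9m0s: context deadline exceeded" failure is produced by a wait loop that keeps re-listing the pods until one is Running or the deadline passes. A rough sketch of that pattern, assuming client-go's wait helpers; this approximates the test helper and is not its actual implementation.

// Package waitsketch illustrates the poll-until-deadline pattern behind the 9m0s timeout above.
package waitsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForDashboard blocks until a pod matching k8s-app=kubernetes-dashboard is Running,
// or returns a deadline error (what the report logs as "context deadline exceeded").
func WaitForDashboard(client kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				// Transient apiserver errors (e.g. connection refused) are logged as warnings and retried.
				fmt.Println("WARNING: pod list failed:", err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
}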
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (225.173128ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-593412 logs -n 25: (1.646872387s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-335786 -- sudo                         | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-335786                                 | cert-options-335786          | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:10 UTC |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:10 UTC | 20 Sep 24 22:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-696728                           | kubernetes-upgrade-696728    | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:11 UTC |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:11 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-032884             | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:12 UTC | 20 Sep 24 22:13 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-673228                              | cert-expiration-673228       | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	| delete  | -p                                                     | disable-driver-mounts-440582 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | disable-driver-mounts-440582                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:14 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-562750            | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC | 20 Sep 24 22:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:13 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-799783  | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-032884                  | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-032884                                   | no-preload-032884            | jenkins | v1.34.0 | 20 Sep 24 22:14 UTC | 20 Sep 24 22:26 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-593412        | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-562750                 | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-562750                                  | embed-certs-562750           | jenkins | v1.34.0 | 20 Sep 24 22:15 UTC | 20 Sep 24 22:24 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-799783       | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-799783 | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:25 UTC |
	|         | default-k8s-diff-port-799783                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:16 UTC | 20 Sep 24 22:17 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-593412             | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC | 20 Sep 24 22:17 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-593412                              | old-k8s-version-593412       | jenkins | v1.34.0 | 20 Sep 24 22:17 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 22:17:05
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 22:17:05.861648   65164 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:17:05.861774   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.861787   65164 out.go:358] Setting ErrFile to fd 2...
	I0920 22:17:05.861792   65164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:17:05.862021   65164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:17:05.862599   65164 out.go:352] Setting JSON to false
	I0920 22:17:05.863528   65164 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7174,"bootTime":1726863452,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:17:05.863630   65164 start.go:139] virtualization: kvm guest
	I0920 22:17:05.866077   65164 out.go:177] * [old-k8s-version-593412] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:17:05.867484   65164 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:17:05.867531   65164 notify.go:220] Checking for updates...
	I0920 22:17:05.869987   65164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:17:05.871483   65164 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:17:05.872745   65164 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:17:05.874062   65164 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:17:05.875089   65164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:17:05.876578   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:17:05.876975   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.877047   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.891847   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40137
	I0920 22:17:05.892301   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.892821   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.892841   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.893134   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.893312   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.895266   65164 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 22:17:05.896322   65164 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:17:05.896637   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:17:05.896671   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:17:05.912301   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37681
	I0920 22:17:05.912667   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:17:05.913170   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:17:05.913189   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:17:05.913475   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:17:05.913672   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:17:05.949827   65164 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 22:17:05.951258   65164 start.go:297] selected driver: kvm2
	I0920 22:17:05.951275   65164 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h
0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.951380   65164 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:17:05.952032   65164 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.952098   65164 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 22:17:05.967645   65164 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 22:17:05.968048   65164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:17:05.968078   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:17:05.968122   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:17:05.968161   65164 start.go:340] cluster config:
	{Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:17:05.968267   65164 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 22:17:05.970080   65164 out.go:177] * Starting "old-k8s-version-593412" primary control-plane node in "old-k8s-version-593412" cluster
	I0920 22:17:03.331236   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:06.403275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:05.971180   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:17:05.971218   65164 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 22:17:05.971227   65164 cache.go:56] Caching tarball of preloaded images
	I0920 22:17:05.971292   65164 preload.go:172] Found /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0920 22:17:05.971301   65164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0920 22:17:05.971394   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:17:05.971564   65164 start.go:360] acquireMachinesLock for old-k8s-version-593412: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:17:12.483250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:15.555283   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:21.635247   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:24.707272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:30.787265   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:33.859227   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:39.939281   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:43.011250   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:49.091264   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:52.163272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:17:58.243272   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:01.315252   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:07.395256   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:10.467248   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:16.547257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:19.619309   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:25.699278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:28.771275   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:34.851234   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:37.923271   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:44.003246   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:47.075274   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:53.155254   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:18:56.227303   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:02.307307   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:05.379257   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:11.459278   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:14.531267   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:20.611255   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:23.683284   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:29.763270   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:32.835325   64076 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.19:22: connect: no route to host
	I0920 22:19:35.840483   64543 start.go:364] duration metric: took 3m48.294081485s to acquireMachinesLock for "embed-certs-562750"
	I0920 22:19:35.840533   64543 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:35.840542   64543 fix.go:54] fixHost starting: 
	I0920 22:19:35.840886   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:35.840934   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:35.856524   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36449
	I0920 22:19:35.856933   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:35.857405   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:19:35.857419   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:35.857754   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:35.857958   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:35.858095   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:19:35.859845   64543 fix.go:112] recreateIfNeeded on embed-certs-562750: state=Stopped err=<nil>
	I0920 22:19:35.859876   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	W0920 22:19:35.860023   64543 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:35.861973   64543 out.go:177] * Restarting existing kvm2 VM for "embed-certs-562750" ...
	I0920 22:19:35.863394   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Start
	I0920 22:19:35.863563   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring networks are active...
	I0920 22:19:35.864411   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network default is active
	I0920 22:19:35.864746   64543 main.go:141] libmachine: (embed-certs-562750) Ensuring network mk-embed-certs-562750 is active
	I0920 22:19:35.865085   64543 main.go:141] libmachine: (embed-certs-562750) Getting domain xml...
	I0920 22:19:35.865728   64543 main.go:141] libmachine: (embed-certs-562750) Creating domain...
	I0920 22:19:37.094096   64543 main.go:141] libmachine: (embed-certs-562750) Waiting to get IP...
	I0920 22:19:37.095056   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.095580   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.095671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.095567   66127 retry.go:31] will retry after 248.155801ms: waiting for machine to come up
	I0920 22:19:37.345108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.345577   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.345634   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.345559   66127 retry.go:31] will retry after 340.822818ms: waiting for machine to come up
	I0920 22:19:35.837792   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:35.837839   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838152   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:19:35.838177   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:19:35.838385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:19:35.840336   64076 machine.go:96] duration metric: took 4m37.421385706s to provisionDockerMachine
	I0920 22:19:35.840382   64076 fix.go:56] duration metric: took 4m37.44364588s for fixHost
	I0920 22:19:35.840392   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 4m37.443690605s
	W0920 22:19:35.840414   64076 start.go:714] error starting host: provision: host is not running
	W0920 22:19:35.840512   64076 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I0920 22:19:35.840521   64076 start.go:729] Will try again in 5 seconds ...
	I0920 22:19:37.688209   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:37.688587   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:37.688615   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:37.688534   66127 retry.go:31] will retry after 321.605412ms: waiting for machine to come up
	I0920 22:19:38.012304   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.012829   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.012853   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.012780   66127 retry.go:31] will retry after 475.245269ms: waiting for machine to come up
	I0920 22:19:38.489354   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:38.489733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:38.489763   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:38.489687   66127 retry.go:31] will retry after 667.725945ms: waiting for machine to come up
	I0920 22:19:39.158494   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.158997   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.159024   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.158941   66127 retry.go:31] will retry after 582.98832ms: waiting for machine to come up
	I0920 22:19:39.743778   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:39.744147   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:39.744184   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:39.744092   66127 retry.go:31] will retry after 849.746424ms: waiting for machine to come up
	I0920 22:19:40.595098   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:40.595489   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:40.595518   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:40.595440   66127 retry.go:31] will retry after 1.349776305s: waiting for machine to come up
	I0920 22:19:41.947028   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:41.947449   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:41.947472   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:41.947422   66127 retry.go:31] will retry after 1.323416137s: waiting for machine to come up
	I0920 22:19:40.840969   64076 start.go:360] acquireMachinesLock for no-preload-032884: {Name:mk05e9a22914d0a2faa27183f9c04ab590ad3d64 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0920 22:19:43.272963   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:43.273397   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:43.273424   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:43.273344   66127 retry.go:31] will retry after 1.986214907s: waiting for machine to come up
	I0920 22:19:45.262662   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:45.263148   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:45.263170   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:45.263095   66127 retry.go:31] will retry after 2.68512379s: waiting for machine to come up
	I0920 22:19:47.950652   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:47.951043   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:47.951069   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:47.950977   66127 retry.go:31] will retry after 3.616930237s: waiting for machine to come up
	I0920 22:19:51.569337   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:51.569741   64543 main.go:141] libmachine: (embed-certs-562750) DBG | unable to find current IP address of domain embed-certs-562750 in network mk-embed-certs-562750
	I0920 22:19:51.569774   64543 main.go:141] libmachine: (embed-certs-562750) DBG | I0920 22:19:51.569692   66127 retry.go:31] will retry after 2.803702381s: waiting for machine to come up
	I0920 22:19:55.615900   64930 start.go:364] duration metric: took 3m10.214658006s to acquireMachinesLock for "default-k8s-diff-port-799783"
	I0920 22:19:55.615986   64930 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:19:55.615995   64930 fix.go:54] fixHost starting: 
	I0920 22:19:55.616427   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:19:55.616483   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:19:55.636346   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I0920 22:19:55.636780   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:19:55.637258   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:19:55.637279   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:19:55.637578   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:19:55.637770   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:19:55.637911   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:19:55.639942   64930 fix.go:112] recreateIfNeeded on default-k8s-diff-port-799783: state=Stopped err=<nil>
	I0920 22:19:55.639973   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	W0920 22:19:55.640144   64930 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:19:55.642492   64930 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-799783" ...
	I0920 22:19:54.376796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377333   64543 main.go:141] libmachine: (embed-certs-562750) Found IP for machine: 192.168.50.193
	I0920 22:19:54.377353   64543 main.go:141] libmachine: (embed-certs-562750) Reserving static IP address...
	I0920 22:19:54.377363   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has current primary IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.377737   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.377768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | skip adding static IP to network mk-embed-certs-562750 - found existing host DHCP lease matching {name: "embed-certs-562750", mac: "52:54:00:be:17:82", ip: "192.168.50.193"}
	I0920 22:19:54.377787   64543 main.go:141] libmachine: (embed-certs-562750) Reserved static IP address: 192.168.50.193
	I0920 22:19:54.377804   64543 main.go:141] libmachine: (embed-certs-562750) Waiting for SSH to be available...
	I0920 22:19:54.377849   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Getting to WaitForSSH function...
	I0920 22:19:54.379913   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380197   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.380236   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.380325   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH client type: external
	I0920 22:19:54.380350   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa (-rw-------)
	I0920 22:19:54.380382   64543 main.go:141] libmachine: (embed-certs-562750) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.193 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:19:54.380395   64543 main.go:141] libmachine: (embed-certs-562750) DBG | About to run SSH command:
	I0920 22:19:54.380405   64543 main.go:141] libmachine: (embed-certs-562750) DBG | exit 0
	I0920 22:19:54.507262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | SSH cmd err, output: <nil>: 
	I0920 22:19:54.507667   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetConfigRaw
	I0920 22:19:54.508332   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.510768   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511108   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.511146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.511441   64543 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/config.json ...
	I0920 22:19:54.511672   64543 machine.go:93] provisionDockerMachine start ...
	I0920 22:19:54.511690   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:54.511883   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.514116   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514496   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.514524   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.514646   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.514794   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.514931   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.515053   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.515206   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.515457   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.515472   64543 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:19:54.623421   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:19:54.623455   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623694   64543 buildroot.go:166] provisioning hostname "embed-certs-562750"
	I0920 22:19:54.623717   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.623843   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.626435   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626733   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.626754   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.626924   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.627122   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.627436   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.627594   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.627749   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.627760   64543 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-562750 && echo "embed-certs-562750" | sudo tee /etc/hostname
	I0920 22:19:54.750707   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-562750
	
	I0920 22:19:54.750729   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.753239   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753512   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.753544   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.753764   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.753968   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754153   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.754312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.754481   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:54.754650   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:54.754666   64543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-562750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-562750/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-562750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:19:54.872899   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:19:54.872925   64543 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:19:54.872961   64543 buildroot.go:174] setting up certificates
	I0920 22:19:54.872971   64543 provision.go:84] configureAuth start
	I0920 22:19:54.872979   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetMachineName
	I0920 22:19:54.873237   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:54.875851   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876280   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.876302   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.876471   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.878729   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.879143   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.879320   64543 provision.go:143] copyHostCerts
	I0920 22:19:54.879370   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:19:54.879378   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:19:54.879467   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:19:54.879581   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:19:54.879591   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:19:54.879624   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:19:54.879694   64543 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:19:54.879703   64543 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:19:54.879736   64543 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:19:54.879799   64543 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.embed-certs-562750 san=[127.0.0.1 192.168.50.193 embed-certs-562750 localhost minikube]
	I0920 22:19:54.981669   64543 provision.go:177] copyRemoteCerts
	I0920 22:19:54.981737   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:19:54.981762   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:54.984500   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984792   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:54.984814   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:54.984986   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:54.985187   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:54.985305   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:54.985424   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.069012   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:19:55.094417   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:19:55.119895   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:19:55.145399   64543 provision.go:87] duration metric: took 272.417168ms to configureAuth
	I0920 22:19:55.145433   64543 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:19:55.145627   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:19:55.145719   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.148502   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.148866   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.148891   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.149082   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.149272   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149399   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.149528   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.149637   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.149785   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.149796   64543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:19:55.374395   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:19:55.374422   64543 machine.go:96] duration metric: took 862.73711ms to provisionDockerMachine
	I0920 22:19:55.374436   64543 start.go:293] postStartSetup for "embed-certs-562750" (driver="kvm2")
	I0920 22:19:55.374448   64543 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:19:55.374463   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.374798   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:19:55.374815   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.377262   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377649   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.377671   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.377856   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.378054   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.378203   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.378327   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.462119   64543 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:19:55.466515   64543 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:19:55.466544   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:19:55.466621   64543 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:19:55.466720   64543 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:19:55.466846   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:19:55.476703   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:19:55.501943   64543 start.go:296] duration metric: took 127.489738ms for postStartSetup
	I0920 22:19:55.501987   64543 fix.go:56] duration metric: took 19.661445548s for fixHost
	I0920 22:19:55.502009   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.505054   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505394   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.505420   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.505606   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.505826   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.505969   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.506160   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.506352   64543 main.go:141] libmachine: Using SSH client type: native
	I0920 22:19:55.506532   64543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.50.193 22 <nil> <nil>}
	I0920 22:19:55.506541   64543 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:19:55.615752   64543 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870795.575298210
	
	I0920 22:19:55.615773   64543 fix.go:216] guest clock: 1726870795.575298210
	I0920 22:19:55.615781   64543 fix.go:229] Guest: 2024-09-20 22:19:55.57529821 +0000 UTC Remote: 2024-09-20 22:19:55.501991982 +0000 UTC m=+248.100299917 (delta=73.306228ms)
	I0920 22:19:55.615805   64543 fix.go:200] guest clock delta is within tolerance: 73.306228ms
	I0920 22:19:55.615812   64543 start.go:83] releasing machines lock for "embed-certs-562750", held for 19.7753011s
	I0920 22:19:55.615850   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.616131   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:55.618697   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619117   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.619144   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.619347   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.619859   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620028   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:19:55.620110   64543 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:19:55.620162   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.620261   64543 ssh_runner.go:195] Run: cat /version.json
	I0920 22:19:55.620287   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:19:55.622796   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623204   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623233   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623254   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623601   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.623660   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:55.623681   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:55.623769   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.623880   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:19:55.623941   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.624046   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:19:55.624175   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:19:55.624291   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:19:55.731482   64543 ssh_runner.go:195] Run: systemctl --version
	I0920 22:19:55.738176   64543 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:19:55.887545   64543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:19:55.894532   64543 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:19:55.894610   64543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:19:55.911070   64543 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:19:55.911097   64543 start.go:495] detecting cgroup driver to use...
	I0920 22:19:55.911178   64543 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:19:55.928676   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:19:55.943462   64543 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:19:55.943543   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:19:55.960910   64543 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:19:55.978109   64543 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:19:56.101434   64543 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:19:56.238737   64543 docker.go:233] disabling docker service ...
	I0920 22:19:56.238814   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:19:56.254099   64543 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:19:56.268926   64543 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:19:56.417839   64543 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:19:56.548769   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:19:56.563311   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:19:56.583073   64543 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:19:56.583134   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.598212   64543 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:19:56.598287   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.610151   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.623499   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.637734   64543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:19:56.649751   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.665619   64543 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.689725   64543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:19:56.704698   64543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:19:56.716205   64543 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:19:56.716276   64543 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:19:56.734247   64543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:19:56.745725   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:19:56.879084   64543 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:19:56.978332   64543 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:19:56.978400   64543 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:19:56.985047   64543 start.go:563] Will wait 60s for crictl version
	I0920 22:19:56.985112   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:19:56.989231   64543 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:19:57.029037   64543 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:19:57.029126   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.060838   64543 ssh_runner.go:195] Run: crio --version
	I0920 22:19:57.092838   64543 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:19:57.094132   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetIP
	I0920 22:19:57.097205   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097597   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:19:57.097642   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:19:57.097865   64543 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0920 22:19:57.102433   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:19:57.116015   64543 kubeadm.go:883] updating cluster {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:19:57.116134   64543 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:19:57.116197   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:19:57.156317   64543 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:19:57.156380   64543 ssh_runner.go:195] Run: which lz4
	I0920 22:19:57.160502   64543 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:19:57.164982   64543 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:19:57.165024   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:19:55.643600   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Start
	I0920 22:19:55.643811   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring networks are active...
	I0920 22:19:55.644651   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network default is active
	I0920 22:19:55.644941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Ensuring network mk-default-k8s-diff-port-799783 is active
	I0920 22:19:55.645308   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Getting domain xml...
	I0920 22:19:55.645897   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Creating domain...
	I0920 22:19:56.961175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting to get IP...
	I0920 22:19:56.961986   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962442   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:56.962522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:56.962429   66271 retry.go:31] will retry after 219.744487ms: waiting for machine to come up
	I0920 22:19:57.184180   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184718   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.184751   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.184663   66271 retry.go:31] will retry after 331.58082ms: waiting for machine to come up
	I0920 22:19:57.518314   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518907   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.518945   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.518878   66271 retry.go:31] will retry after 308.110588ms: waiting for machine to come up
	I0920 22:19:57.829213   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829754   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:57.829782   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:57.829711   66271 retry.go:31] will retry after 421.186303ms: waiting for machine to come up
	I0920 22:19:58.252218   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252662   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.252689   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.252618   66271 retry.go:31] will retry after 465.325985ms: waiting for machine to come up
	I0920 22:19:58.719232   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719741   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:58.719769   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:58.719679   66271 retry.go:31] will retry after 865.976725ms: waiting for machine to come up
	I0920 22:19:59.587325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587694   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:19:59.587725   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:19:59.587654   66271 retry.go:31] will retry after 1.114189927s: waiting for machine to come up
	I0920 22:19:58.663210   64543 crio.go:462] duration metric: took 1.502738525s to copy over tarball
	I0920 22:19:58.663299   64543 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:00.798877   64543 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.135542137s)
	I0920 22:20:00.798912   64543 crio.go:469] duration metric: took 2.135668164s to extract the tarball
	I0920 22:20:00.798922   64543 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:00.838877   64543 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:00.884361   64543 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:00.884383   64543 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:00.884391   64543 kubeadm.go:934] updating node { 192.168.50.193 8443 v1.31.1 crio true true} ...
	I0920 22:20:00.884492   64543 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-562750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.193
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:00.884559   64543 ssh_runner.go:195] Run: crio config
	I0920 22:20:00.930933   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:00.930987   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:00.931000   64543 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:00.931026   64543 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.193 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-562750 NodeName:embed-certs-562750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.193"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.193 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:00.931179   64543 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.193
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-562750"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.193
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.193"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:00.931237   64543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:00.941926   64543 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:00.942020   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:00.952488   64543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0920 22:20:00.970386   64543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:00.987879   64543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0920 22:20:01.006743   64543 ssh_runner.go:195] Run: grep 192.168.50.193	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:01.010865   64543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.193	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:01.023730   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:01.151649   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
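
The /etc/hosts edit a few lines above filters out any stale control-plane.minikube.internal mapping, appends the current one, and copies the temp file back over /etc/hosts so the entry is replaced rather than duplicated. A minimal Go sketch of that same filter-and-rewrite idea follows; the IP and hostname come from the log above, while the function name, file handling, and use of os.Rename in place of the logged `sudo cp` are illustrative only, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// rewriteHosts drops any existing line ending in "<TAB>hostname" and appends a
// fresh "ip<TAB>hostname" entry, writing to a temp file before replacing path.
func rewriteHosts(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale mapping; re-added below with the current IP
		}
		out = append(out, line)
	}
	out = append(out, fmt.Sprintf("%s\t%s", ip, hostname))
	tmp := path + ".new"
	if err := os.WriteFile(tmp, []byte(strings.Join(out, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // the log copies the temp file with sudo; rename is the same idea
}

func main() {
	if err := rewriteHosts("/etc/hosts", "192.168.50.193", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
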
	I0920 22:20:01.169768   64543 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750 for IP: 192.168.50.193
	I0920 22:20:01.169804   64543 certs.go:194] generating shared ca certs ...
	I0920 22:20:01.169823   64543 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:01.170011   64543 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:01.170073   64543 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:01.170083   64543 certs.go:256] generating profile certs ...
	I0920 22:20:01.170195   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/client.key
	I0920 22:20:01.170276   64543 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key.1624b01a
	I0920 22:20:01.170322   64543 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key
	I0920 22:20:01.170470   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:01.170511   64543 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:01.170524   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:01.170563   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:01.170597   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:01.170625   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:01.170675   64543 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:01.171652   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:01.235898   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:01.276664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:01.313610   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:01.346451   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 22:20:01.373805   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:20:01.401908   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:01.428305   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/embed-certs-562750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:01.460649   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:01.487664   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:01.513122   64543 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:01.539736   64543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:01.559568   64543 ssh_runner.go:195] Run: openssl version
	I0920 22:20:01.565838   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:01.579013   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584012   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.584090   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:01.590609   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:01.603863   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:01.616932   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621762   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.621816   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:01.627823   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:01.640444   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:01.654236   64543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659030   64543 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.659087   64543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:01.665104   64543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:01.678337   64543 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:01.683177   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:01.689492   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:01.695839   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:01.702517   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:01.709021   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:01.715624   64543 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
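
Each of the `openssl x509 ... -checkend 86400` runs above asks whether a certificate will still be valid 24 hours (86400 seconds) from now, signalling the answer through its exit status. A rough Go equivalent of a single such check, assuming a PEM-encoded certificate at one of the paths named in the log; the helper name and error handling are illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Mirrors the intent of openssl's -checkend 86400 exit status.
	fmt.Println("expires within 24h:", soon)
}
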
	I0920 22:20:01.721888   64543 kubeadm.go:392] StartCluster: {Name:embed-certs-562750 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-562750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:01.721981   64543 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:01.722029   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.771458   64543 cri.go:89] found id: ""
	I0920 22:20:01.771526   64543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:01.785814   64543 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:01.785833   64543 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:01.785887   64543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:01.796192   64543 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:01.797197   64543 kubeconfig.go:125] found "embed-certs-562750" server: "https://192.168.50.193:8443"
	I0920 22:20:01.799185   64543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:01.809313   64543 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.193
	I0920 22:20:01.809345   64543 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:01.809373   64543 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:01.809437   64543 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:01.846530   64543 cri.go:89] found id: ""
	I0920 22:20:01.846605   64543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:01.864522   64543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:01.874792   64543 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:01.874817   64543 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:01.874877   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:01.884330   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:01.884401   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:01.893899   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:01.903119   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:01.903176   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:01.913163   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.924161   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:01.924216   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:01.935472   64543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:01.944837   64543 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:01.944916   64543 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:01.955372   64543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:01.966120   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:02.094557   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:00.703843   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704393   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:00.704421   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:00.704320   66271 retry.go:31] will retry after 1.075959086s: waiting for machine to come up
	I0920 22:20:01.781352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781785   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:01.781815   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:01.781745   66271 retry.go:31] will retry after 1.355153165s: waiting for machine to come up
	I0920 22:20:03.139404   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:03.139926   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:03.139858   66271 retry.go:31] will retry after 1.818194289s: waiting for machine to come up
	I0920 22:20:04.959670   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960173   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:04.960242   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:04.960146   66271 retry.go:31] will retry after 2.18132962s: waiting for machine to come up
	I0920 22:20:03.143089   64543 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.048498511s)
	I0920 22:20:03.143125   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.364380   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.431644   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:03.547063   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:03.547154   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.047320   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.548259   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:04.577405   64543 api_server.go:72] duration metric: took 1.03034068s to wait for apiserver process to appear ...
	I0920 22:20:04.577433   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:04.577456   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.359356   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.359397   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.359415   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.433696   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:07.433729   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:07.578317   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:07.605834   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:07.605869   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.078218   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.083148   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.083174   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:08.577638   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:08.583009   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:08.583039   64543 api_server.go:103] status: https://192.168.50.193:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:09.077549   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:20:09.081990   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:20:09.088232   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:09.088256   64543 api_server.go:131] duration metric: took 4.510816425s to wait for apiserver health ...
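
The 403, 500, and finally 200 responses above are the usual progression for a freshly restarted apiserver: anonymous /healthz requests are typically rejected while RBAC bootstrap is still reconciling, then individual post-start hooks (etcd, rbac/bootstrap-roles, scheduling priority classes) report failed until they complete. A minimal polling loop in the spirit of those checks, written in Go; the InsecureSkipVerify setting assumes the self-signed serving certificate is not in the client trust store, and the URL and timeout are taken from the log rather than from minikube's code:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			// Non-200 bodies look like the component-by-component dumps above.
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.193:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
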
	I0920 22:20:09.088263   64543 cni.go:84] Creating CNI manager for ""
	I0920 22:20:09.088271   64543 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:09.090388   64543 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:07.143268   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143767   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:07.143806   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:07.143720   66271 retry.go:31] will retry after 3.514856392s: waiting for machine to come up
	I0920 22:20:09.091750   64543 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:09.103171   64543 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
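
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is what the earlier "recommending bridge" decision refers to: a bridge-plugin chain covering the 10.244.0.0/16 pod CIDR chosen in the kubeadm config. The exact file contents are not shown in the log; the sketch below writes an illustrative bridge+portmap conflist and is not claimed to be byte-for-byte what minikube ships.

package main

import "os"

// An illustrative bridge+portmap CNI conflist for the 10.244.0.0/16 pod CIDR.
// Field values here are examples; minikube's real template may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	// Target path matches the `sudo mkdir -p /etc/cni/net.d` and scp steps above.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		panic(err)
	}
}
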
	I0920 22:20:09.123617   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:09.135750   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:09.135778   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:09.135790   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:09.135800   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:09.135809   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:09.135815   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:20:09.135838   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:09.135843   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:09.135851   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:20:09.135864   64543 system_pods.go:74] duration metric: took 12.228085ms to wait for pod list to return data ...
	I0920 22:20:09.135870   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:09.140570   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:09.140598   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:09.140609   64543 node_conditions.go:105] duration metric: took 4.732339ms to run NodePressure ...
	I0920 22:20:09.140625   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:09.411017   64543 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423634   64543 kubeadm.go:739] kubelet initialised
	I0920 22:20:09.423657   64543 kubeadm.go:740] duration metric: took 12.617105ms waiting for restarted kubelet to initialise ...
	I0920 22:20:09.423665   64543 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:09.427934   64543 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.438355   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438377   64543 pod_ready.go:82] duration metric: took 10.418416ms for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.438386   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.438403   64543 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.447837   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447868   64543 pod_ready.go:82] duration metric: took 9.456346ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.447877   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "etcd-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.447886   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.456304   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456328   64543 pod_ready.go:82] duration metric: took 8.434885ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.456338   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.456344   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.528428   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528451   64543 pod_ready.go:82] duration metric: took 72.099227ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.528460   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.528465   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:09.927405   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927434   64543 pod_ready.go:82] duration metric: took 398.961394ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:09.927443   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-proxy-wsrnw" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:09.927449   64543 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.327103   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327128   64543 pod_ready.go:82] duration metric: took 399.672325ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.327140   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.327153   64543 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:10.727093   64543 pod_ready.go:98] node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727121   64543 pod_ready.go:82] duration metric: took 399.958241ms for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:10.727130   64543 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-562750" hosting pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:10.727137   64543 pod_ready.go:39] duration metric: took 1.303464561s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
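
The pod_ready waits above skip each pod while the hosting node embed-certs-562750 still reports Ready=False, and otherwise inspect the pod's own Ready condition. A minimal client-go sketch of that per-pod check, assuming a kubeconfig at the default ~/.kube/config location; the pod name is one from the log, and the function name and error handling are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has condition Ready=True.
func podIsReady(client kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ready, err := podIsReady(client, "kube-system", "coredns-7c65d6cfc9-46wxm")
	fmt.Println("ready:", ready, "err:", err)
}
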
	I0920 22:20:10.727154   64543 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:20:10.741332   64543 ops.go:34] apiserver oom_adj: -16
	I0920 22:20:10.741356   64543 kubeadm.go:597] duration metric: took 8.955512402s to restartPrimaryControlPlane
	I0920 22:20:10.741367   64543 kubeadm.go:394] duration metric: took 9.019487096s to StartCluster
	I0920 22:20:10.741382   64543 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.741459   64543 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:10.744903   64543 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:10.745155   64543 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.193 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:20:10.745198   64543 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:20:10.745344   64543 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-562750"
	I0920 22:20:10.745366   64543 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-562750"
	W0920 22:20:10.745375   64543 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:20:10.745376   64543 addons.go:69] Setting metrics-server=true in profile "embed-certs-562750"
	I0920 22:20:10.745398   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745397   64543 config.go:182] Loaded profile config "embed-certs-562750": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:10.745409   64543 addons.go:234] Setting addon metrics-server=true in "embed-certs-562750"
	W0920 22:20:10.745419   64543 addons.go:243] addon metrics-server should already be in state true
	I0920 22:20:10.745344   64543 addons.go:69] Setting default-storageclass=true in profile "embed-certs-562750"
	I0920 22:20:10.745451   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.745468   64543 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-562750"
	I0920 22:20:10.745705   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745734   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745773   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.745853   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.745943   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.746025   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.747255   64543 out.go:177] * Verifying Kubernetes components...
	I0920 22:20:10.749042   64543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:10.761119   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45269
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33879
	I0920 22:20:10.761143   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0920 22:20:10.761517   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761598   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.761643   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.762025   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762040   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762150   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762157   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.762169   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762172   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.762352   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762535   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762545   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.762611   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.763136   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763137   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.763182   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.763208   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.766161   64543 addons.go:234] Setting addon default-storageclass=true in "embed-certs-562750"
	W0920 22:20:10.766184   64543 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:20:10.766215   64543 host.go:66] Checking if "embed-certs-562750" exists ...
	I0920 22:20:10.766587   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.766645   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.779678   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33311
	I0920 22:20:10.780153   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.780716   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.780739   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.781116   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.781184   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45417
	I0920 22:20:10.781406   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.781548   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.782023   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.782037   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.782492   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.782496   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39425
	I0920 22:20:10.782988   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.783141   64543 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:10.783174   64543 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:10.783312   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.783458   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.783479   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.783861   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.784058   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.785625   64543 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:10.785682   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.787202   64543 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:20:10.787263   64543 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:10.787276   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:20:10.787293   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.788800   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:20:10.788820   64543 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:20:10.788838   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.790560   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791094   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.791126   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.791400   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.791591   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.791735   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.791873   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.792691   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793146   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.793161   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.793440   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.793640   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.793776   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.793904   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.801231   64543 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0920 22:20:10.801612   64543 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:10.802055   64543 main.go:141] libmachine: Using API Version  1
	I0920 22:20:10.802078   64543 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:10.802442   64543 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:10.802720   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetState
	I0920 22:20:10.804333   64543 main.go:141] libmachine: (embed-certs-562750) Calling .DriverName
	I0920 22:20:10.804544   64543 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:10.804561   64543 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:20:10.804574   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHHostname
	I0920 22:20:10.807883   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808403   64543 main.go:141] libmachine: (embed-certs-562750) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:be:17:82", ip: ""} in network mk-embed-certs-562750: {Iface:virbr2 ExpiryTime:2024-09-20 23:19:47 +0000 UTC Type:0 Mac:52:54:00:be:17:82 Iaid: IPaddr:192.168.50.193 Prefix:24 Hostname:embed-certs-562750 Clientid:01:52:54:00:be:17:82}
	I0920 22:20:10.808425   64543 main.go:141] libmachine: (embed-certs-562750) DBG | domain embed-certs-562750 has defined IP address 192.168.50.193 and MAC address 52:54:00:be:17:82 in network mk-embed-certs-562750
	I0920 22:20:10.808636   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHPort
	I0920 22:20:10.808809   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHKeyPath
	I0920 22:20:10.808949   64543 main.go:141] libmachine: (embed-certs-562750) Calling .GetSSHUsername
	I0920 22:20:10.809102   64543 sshutil.go:53] new ssh client: &{IP:192.168.50.193 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/embed-certs-562750/id_rsa Username:docker}
	I0920 22:20:10.974334   64543 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:10.997919   64543 node_ready.go:35] waiting up to 6m0s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:11.061767   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:20:11.079795   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:20:11.079817   64543 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:20:11.097101   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:20:11.144947   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:20:11.144971   64543 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:20:11.217680   64543 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:11.217704   64543 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:20:11.269307   64543 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:20:12.132770   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070956267s)
	I0920 22:20:12.132827   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132840   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.132840   64543 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035704985s)
	I0920 22:20:12.132882   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.132898   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133151   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133183   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133202   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133217   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133221   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133225   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133248   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133262   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133285   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.133296   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.133440   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133456   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.133466   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133523   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.133525   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.133534   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139359   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.139373   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.139612   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.139628   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.139643   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177345   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177367   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177669   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177689   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177705   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177715   64543 main.go:141] libmachine: Making call to close driver server
	I0920 22:20:12.177724   64543 main.go:141] libmachine: (embed-certs-562750) Calling .Close
	I0920 22:20:12.177933   64543 main.go:141] libmachine: (embed-certs-562750) DBG | Closing plugin on server side
	I0920 22:20:12.177974   64543 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:20:12.177984   64543 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:20:12.177998   64543 addons.go:475] Verifying addon metrics-server=true in "embed-certs-562750"
	I0920 22:20:12.180228   64543 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:20:12.181532   64543 addons.go:510] duration metric: took 1.436346895s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:20:15.124170   65164 start.go:364] duration metric: took 3m9.152575828s to acquireMachinesLock for "old-k8s-version-593412"
	I0920 22:20:15.124246   65164 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:15.124258   65164 fix.go:54] fixHost starting: 
	I0920 22:20:15.124618   65164 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:15.124669   65164 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:15.144438   65164 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0920 22:20:15.144935   65164 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:15.145417   65164 main.go:141] libmachine: Using API Version  1
	I0920 22:20:15.145446   65164 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:15.145757   65164 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:15.145943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:15.146103   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetState
	I0920 22:20:15.147740   65164 fix.go:112] recreateIfNeeded on old-k8s-version-593412: state=Stopped err=<nil>
	I0920 22:20:15.147770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	W0920 22:20:15.147919   65164 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:15.150120   65164 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-593412" ...
	I0920 22:20:10.659920   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | unable to find current IP address of domain default-k8s-diff-port-799783 in network mk-default-k8s-diff-port-799783
	I0920 22:20:10.660324   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | I0920 22:20:10.660245   66271 retry.go:31] will retry after 3.009712634s: waiting for machine to come up
	I0920 22:20:13.672538   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673101   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has current primary IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.673132   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Found IP for machine: 192.168.39.94
	I0920 22:20:13.673144   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserving static IP address...
	I0920 22:20:13.673601   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.673657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | skip adding static IP to network mk-default-k8s-diff-port-799783 - found existing host DHCP lease matching {name: "default-k8s-diff-port-799783", mac: "52:54:00:30:89:df", ip: "192.168.39.94"}
	I0920 22:20:13.673673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Reserved static IP address: 192.168.39.94
	I0920 22:20:13.673692   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Waiting for SSH to be available...
	I0920 22:20:13.673706   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Getting to WaitForSSH function...
	I0920 22:20:13.675591   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.675941   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.675961   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.676120   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH client type: external
	I0920 22:20:13.676150   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa (-rw-------)
	I0920 22:20:13.676183   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:13.676199   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | About to run SSH command:
	I0920 22:20:13.676212   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | exit 0
	I0920 22:20:13.803369   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:13.803683   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetConfigRaw
	I0920 22:20:13.804303   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:13.806544   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.806902   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.806932   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.807178   64930 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/config.json ...
	I0920 22:20:13.807442   64930 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:13.807465   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:13.807682   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.809783   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810027   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.810049   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.810221   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.810472   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810628   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.810727   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.810875   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.811088   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.811112   64930 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:13.923218   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:13.923249   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923489   64930 buildroot.go:166] provisioning hostname "default-k8s-diff-port-799783"
	I0920 22:20:13.923512   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:13.923657   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:13.926007   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926408   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:13.926439   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:13.926613   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:13.926775   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.926915   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:13.927071   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:13.927227   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:13.927389   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:13.927401   64930 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-799783 && echo "default-k8s-diff-port-799783" | sudo tee /etc/hostname
	I0920 22:20:14.058656   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-799783
	
	I0920 22:20:14.058686   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.061487   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.061914   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.061950   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.062157   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.062352   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062515   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.062675   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.062840   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.063061   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.063087   64930 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-799783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-799783/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-799783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:14.190755   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:14.190785   64930 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:14.190818   64930 buildroot.go:174] setting up certificates
	I0920 22:20:14.190831   64930 provision.go:84] configureAuth start
	I0920 22:20:14.190848   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetMachineName
	I0920 22:20:14.191114   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:14.194083   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194504   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.194533   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.194705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.197104   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197415   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.197453   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.197649   64930 provision.go:143] copyHostCerts
	I0920 22:20:14.197710   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:14.197733   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:14.197797   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:14.197911   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:14.197924   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:14.197952   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:14.198025   64930 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:14.198035   64930 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:14.198062   64930 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:14.198128   64930 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-799783 san=[127.0.0.1 192.168.39.94 default-k8s-diff-port-799783 localhost minikube]
	I0920 22:20:14.450887   64930 provision.go:177] copyRemoteCerts
	I0920 22:20:14.450953   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:14.451010   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.453547   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.453849   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.453873   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.454064   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.454256   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.454424   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.454547   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.541380   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:14.566560   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0920 22:20:14.596742   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:14.629406   64930 provision.go:87] duration metric: took 438.556575ms to configureAuth
	I0920 22:20:14.629438   64930 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:14.629656   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:14.629731   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.632710   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633129   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.633160   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.633403   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.633638   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633823   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.633968   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.634151   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:14.634380   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:14.634404   64930 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:14.870302   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:14.870327   64930 machine.go:96] duration metric: took 1.062871025s to provisionDockerMachine
	I0920 22:20:14.870338   64930 start.go:293] postStartSetup for "default-k8s-diff-port-799783" (driver="kvm2")
	I0920 22:20:14.870348   64930 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:14.870363   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:14.870675   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:14.870703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:14.873574   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.873946   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:14.873990   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:14.874110   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:14.874306   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:14.874460   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:14.874604   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:14.963054   64930 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:14.967939   64930 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:14.967973   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:14.968056   64930 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:14.968155   64930 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:14.968292   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:14.978976   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:15.005989   64930 start.go:296] duration metric: took 135.637475ms for postStartSetup
	I0920 22:20:15.006028   64930 fix.go:56] duration metric: took 19.390035006s for fixHost
	I0920 22:20:15.006046   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.008572   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.008972   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.009003   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.009176   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.009355   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009516   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.009637   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.009793   64930 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:15.009948   64930 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0920 22:20:15.009958   64930 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:15.124032   64930 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870815.095821045
	
	I0920 22:20:15.124057   64930 fix.go:216] guest clock: 1726870815.095821045
	I0920 22:20:15.124067   64930 fix.go:229] Guest: 2024-09-20 22:20:15.095821045 +0000 UTC Remote: 2024-09-20 22:20:15.006032337 +0000 UTC m=+209.748283721 (delta=89.788708ms)
	I0920 22:20:15.124093   64930 fix.go:200] guest clock delta is within tolerance: 89.788708ms
	I0920 22:20:15.124099   64930 start.go:83] releasing machines lock for "default-k8s-diff-port-799783", held for 19.50814158s
	I0920 22:20:15.124126   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.124385   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:15.127159   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127522   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.127553   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.127724   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128290   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128438   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:20:15.128550   64930 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:15.128593   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.128689   64930 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:15.128707   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:20:15.131485   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131862   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.131934   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.131963   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132343   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132377   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:15.132418   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:15.132561   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:20:15.132624   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132703   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:20:15.132789   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.132812   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:20:15.132967   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:20:15.239910   64930 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:15.247781   64930 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:15.151593   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .Start
	I0920 22:20:15.151807   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring networks are active...
	I0920 22:20:15.152666   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network default is active
	I0920 22:20:15.153017   65164 main.go:141] libmachine: (old-k8s-version-593412) Ensuring network mk-old-k8s-version-593412 is active
	I0920 22:20:15.153520   65164 main.go:141] libmachine: (old-k8s-version-593412) Getting domain xml...
	I0920 22:20:15.154306   65164 main.go:141] libmachine: (old-k8s-version-593412) Creating domain...
	I0920 22:20:15.405701   64930 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:15.413454   64930 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:15.413546   64930 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:15.431606   64930 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:15.431631   64930 start.go:495] detecting cgroup driver to use...
	I0920 22:20:15.431698   64930 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:15.450077   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:15.464156   64930 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:15.464229   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:15.483489   64930 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:15.498873   64930 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:15.614354   64930 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:15.817412   64930 docker.go:233] disabling docker service ...
	I0920 22:20:15.817495   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:15.833184   64930 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:15.849521   64930 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:15.997696   64930 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:16.138329   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:16.153302   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:16.172896   64930 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:16.172967   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.184383   64930 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:16.184448   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.196869   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.208295   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.220098   64930 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:16.231777   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.243180   64930 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.263881   64930 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:16.275250   64930 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:16.291764   64930 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:16.291839   64930 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:16.307082   64930 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:16.318243   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:16.459321   64930 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:16.563304   64930 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:16.563369   64930 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:16.568520   64930 start.go:563] Will wait 60s for crictl version
	I0920 22:20:16.568586   64930 ssh_runner.go:195] Run: which crictl
	I0920 22:20:16.573573   64930 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:16.617596   64930 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:16.617692   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.656700   64930 ssh_runner.go:195] Run: crio --version
	I0920 22:20:16.691226   64930 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:13.001784   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:15.502542   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:16.692645   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetIP
	I0920 22:20:16.695792   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696162   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:20:16.696189   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:20:16.696457   64930 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:16.700827   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:16.713577   64930 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:16.713724   64930 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:16.713798   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:16.753538   64930 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:16.753621   64930 ssh_runner.go:195] Run: which lz4
	I0920 22:20:16.758261   64930 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:16.763234   64930 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:16.763282   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I0920 22:20:18.277256   64930 crio.go:462] duration metric: took 1.519010373s to copy over tarball
	I0920 22:20:18.277326   64930 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0920 22:20:16.512533   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting to get IP...
	I0920 22:20:16.513609   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.514030   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.514105   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.514023   66497 retry.go:31] will retry after 283.450738ms: waiting for machine to come up
	I0920 22:20:16.799590   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:16.800093   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:16.800125   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:16.800032   66497 retry.go:31] will retry after 325.332719ms: waiting for machine to come up
	I0920 22:20:17.127313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.127784   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.127809   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.127762   66497 retry.go:31] will retry after 407.300726ms: waiting for machine to come up
	I0920 22:20:17.537352   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.537853   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.537884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.537804   66497 retry.go:31] will retry after 376.483945ms: waiting for machine to come up
	I0920 22:20:17.916227   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:17.916679   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:17.916706   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:17.916646   66497 retry.go:31] will retry after 471.842767ms: waiting for machine to come up
	I0920 22:20:18.390326   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:18.390817   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:18.390862   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:18.390788   66497 retry.go:31] will retry after 897.259517ms: waiting for machine to come up
	I0920 22:20:19.289834   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:19.290347   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:19.290376   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:19.290289   66497 retry.go:31] will retry after 1.172691106s: waiting for machine to come up
	I0920 22:20:20.464050   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:20.464487   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:20.464509   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:20.464443   66497 retry.go:31] will retry after 1.205371228s: waiting for machine to come up
	I0920 22:20:17.502934   64543 node_ready.go:53] node "embed-certs-562750" has status "Ready":"False"
	I0920 22:20:18.003595   64543 node_ready.go:49] node "embed-certs-562750" has status "Ready":"True"
	I0920 22:20:18.003623   64543 node_ready.go:38] duration metric: took 7.005667163s for node "embed-certs-562750" to be "Ready" ...
	I0920 22:20:18.003635   64543 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:18.018682   64543 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:20.027253   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:20.404573   64930 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.127218766s)
	I0920 22:20:20.404602   64930 crio.go:469] duration metric: took 2.127314313s to extract the tarball
	I0920 22:20:20.404613   64930 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:20.443060   64930 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:20.486174   64930 crio.go:514] all images are preloaded for cri-o runtime.
	I0920 22:20:20.486200   64930 cache_images.go:84] Images are preloaded, skipping loading
	I0920 22:20:20.486210   64930 kubeadm.go:934] updating node { 192.168.39.94 8444 v1.31.1 crio true true} ...
	I0920 22:20:20.486327   64930 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-799783 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:20.486422   64930 ssh_runner.go:195] Run: crio config
	I0920 22:20:20.539117   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:20.539141   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:20.539151   64930 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:20.539184   64930 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-799783 NodeName:default-k8s-diff-port-799783 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:20:20.539375   64930 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-799783"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:20.539450   64930 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:20:20.552286   64930 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:20.552358   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:20.563964   64930 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0920 22:20:20.582857   64930 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:20.601602   64930 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0920 22:20:20.619624   64930 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:20.623667   64930 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:20.636950   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:20.758158   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:20.776263   64930 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783 for IP: 192.168.39.94
	I0920 22:20:20.776291   64930 certs.go:194] generating shared ca certs ...
	I0920 22:20:20.776307   64930 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:20.776465   64930 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:20.776529   64930 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:20.776543   64930 certs.go:256] generating profile certs ...
	I0920 22:20:20.776644   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.key
	I0920 22:20:20.776722   64930 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key.a6c75af2
	I0920 22:20:20.776787   64930 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key
	I0920 22:20:20.776952   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:20.776999   64930 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:20.777013   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:20.777050   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:20.777083   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:20.777121   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:20.777174   64930 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:20.777796   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:20.829458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:20.879008   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:20.914899   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:20.945905   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0920 22:20:20.981144   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:21.011815   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:21.037214   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:21.062338   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:21.088458   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:21.115033   64930 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:21.138611   64930 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:21.156547   64930 ssh_runner.go:195] Run: openssl version
	I0920 22:20:21.162710   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:21.175725   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180671   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.180733   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:21.186897   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:21.199980   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:21.212840   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217656   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.217710   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:21.223561   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:21.236627   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:21.249706   64930 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254627   64930 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.254693   64930 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:21.260555   64930 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:21.272330   64930 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:21.277146   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:21.283637   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:21.289555   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:21.296395   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:21.302879   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:21.309163   64930 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:21.315778   64930 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-799783 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-799783 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:21.315890   64930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:21.315953   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.358833   64930 cri.go:89] found id: ""
	I0920 22:20:21.358895   64930 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:21.370044   64930 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:21.370067   64930 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:21.370110   64930 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:21.380782   64930 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:21.381858   64930 kubeconfig.go:125] found "default-k8s-diff-port-799783" server: "https://192.168.39.94:8444"
	I0920 22:20:21.383864   64930 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:21.394623   64930 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.94
	I0920 22:20:21.394659   64930 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:21.394692   64930 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:21.394781   64930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:21.434763   64930 cri.go:89] found id: ""
	I0920 22:20:21.434837   64930 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:21.452364   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:21.463047   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:21.463069   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:21.463120   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:20:21.473382   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:21.473435   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:21.483948   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:20:21.494051   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:21.494116   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:21.504392   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.514513   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:21.514586   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:21.525767   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:20:21.535959   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:21.536042   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:21.546544   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:21.559961   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:21.700177   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.039526   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.339307973s)
	I0920 22:20:23.039563   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.240821   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.322946   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:23.419008   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:23.419163   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:23.919775   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.419574   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:24.458386   64930 api_server.go:72] duration metric: took 1.039374058s to wait for apiserver process to appear ...
	I0920 22:20:24.458431   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:20:24.458457   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:24.459045   64930 api_server.go:269] stopped: https://192.168.39.94:8444/healthz: Get "https://192.168.39.94:8444/healthz": dial tcp 192.168.39.94:8444: connect: connection refused
	I0920 22:20:24.958794   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:21.671904   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:21.672382   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:21.672412   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:21.672336   66497 retry.go:31] will retry after 1.261816036s: waiting for machine to come up
	I0920 22:20:22.935576   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:22.936165   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:22.936194   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:22.936101   66497 retry.go:31] will retry after 2.160717711s: waiting for machine to come up
	I0920 22:20:25.098788   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:25.099242   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:25.099264   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:25.099201   66497 retry.go:31] will retry after 2.399420445s: waiting for machine to come up
	I0920 22:20:22.764833   64543 pod_ready.go:103] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:25.030167   64543 pod_ready.go:93] pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.030195   64543 pod_ready.go:82] duration metric: took 7.011481892s for pod "coredns-7c65d6cfc9-46wxm" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.030207   64543 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.039905   64543 pod_ready.go:93] pod "etcd-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.039987   64543 pod_ready.go:82] duration metric: took 9.771571ms for pod "etcd-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.040012   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049855   64543 pod_ready.go:93] pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.049880   64543 pod_ready.go:82] duration metric: took 9.849676ms for pod "kube-apiserver-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.049891   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056105   64543 pod_ready.go:93] pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.056128   64543 pod_ready.go:82] duration metric: took 6.228132ms for pod "kube-controller-manager-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.056139   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061206   64543 pod_ready.go:93] pod "kube-proxy-wsrnw" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.061227   64543 pod_ready.go:82] duration metric: took 5.080531ms for pod "kube-proxy-wsrnw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.061239   64543 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423902   64543 pod_ready.go:93] pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:25.423928   64543 pod_ready.go:82] duration metric: took 362.681392ms for pod "kube-scheduler-embed-certs-562750" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:25.423941   64543 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.432293   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:27.551115   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.551160   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.551182   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.582342   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:20:27.582378   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:20:27.958626   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:27.964667   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:27.964700   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.459321   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.463795   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:20:28.463829   64930 api_server.go:103] status: https://192.168.39.94:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:20:28.959398   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:20:28.965916   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:20:28.978561   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:20:28.978588   64930 api_server.go:131] duration metric: took 4.520149944s to wait for apiserver health ...
	I0920 22:20:28.978597   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:20:28.978612   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:28.980359   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:20:28.981786   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:20:28.999040   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:20:29.044999   64930 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:20:29.061533   64930 system_pods.go:59] 8 kube-system pods found
	I0920 22:20:29.061578   64930 system_pods.go:61] "coredns-7c65d6cfc9-nl65c" [a5d6f5c6-aeef-4f0e-8a96-dcddc7d261d9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:20:29.061592   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [e23bc34a-f559-434e-bc61-30f010efe60b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:20:29.061603   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [acbaa020-f0e3-4e35-a4ab-28de39620eee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:20:29.061610   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [4f08a0c8-9f92-44ba-a0ae-3f61f50bad54] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:20:29.061620   64930 system_pods.go:61] "kube-proxy-mqkfs" [8fe12e1f-7fdc-4bfc-8d11-3ef08212c17c] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0920 22:20:29.061633   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [8373aa94-7b16-4f5a-ab26-bf778e76cecf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:20:29.061641   64930 system_pods.go:61] "metrics-server-6867b74b74-4xsdw" [663691e2-a038-4ba4-b0ee-d7e1e88b9c05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:20:29.061652   64930 system_pods.go:61] "storage-provisioner" [95a6ef93-5fdd-465b-8d34-7b97d56af757] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0920 22:20:29.061662   64930 system_pods.go:74] duration metric: took 16.629791ms to wait for pod list to return data ...
	I0920 22:20:29.061674   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:20:29.066297   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:20:29.066329   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:20:29.066341   64930 node_conditions.go:105] duration metric: took 4.659117ms to run NodePressure ...
	I0920 22:20:29.066360   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:29.337598   64930 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345606   64930 kubeadm.go:739] kubelet initialised
	I0920 22:20:29.345637   64930 kubeadm.go:740] duration metric: took 8.0125ms waiting for restarted kubelet to initialise ...
	I0920 22:20:29.345646   64930 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:20:29.353710   64930 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.359468   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359507   64930 pod_ready.go:82] duration metric: took 5.763208ms for pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.359518   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "coredns-7c65d6cfc9-nl65c" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.359529   64930 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.365960   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.365994   64930 pod_ready.go:82] duration metric: took 6.454066ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.366006   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.366023   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.372228   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372258   64930 pod_ready.go:82] duration metric: took 6.225774ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.372270   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.372278   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.449937   64930 pod_ready.go:98] node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449964   64930 pod_ready.go:82] duration metric: took 77.677417ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	E0920 22:20:29.449975   64930 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-799783" hosting pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-799783" has status "Ready":"False"
	I0920 22:20:29.449981   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849933   64930 pod_ready.go:93] pod "kube-proxy-mqkfs" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:29.849957   64930 pod_ready.go:82] duration metric: took 399.969442ms for pod "kube-proxy-mqkfs" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:29.849971   64930 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:27.500734   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:27.501210   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:27.501250   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:27.501187   66497 retry.go:31] will retry after 2.779302945s: waiting for machine to come up
	I0920 22:20:30.282179   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:30.282663   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | unable to find current IP address of domain old-k8s-version-593412 in network mk-old-k8s-version-593412
	I0920 22:20:30.282693   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | I0920 22:20:30.282632   66497 retry.go:31] will retry after 3.57604454s: waiting for machine to come up
	I0920 22:20:29.930268   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:31.930604   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.104095   64076 start.go:364] duration metric: took 54.263062742s to acquireMachinesLock for "no-preload-032884"
	I0920 22:20:35.104165   64076 start.go:96] Skipping create...Using existing machine configuration
	I0920 22:20:35.104176   64076 fix.go:54] fixHost starting: 
	I0920 22:20:35.104530   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:20:35.104559   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:20:35.121905   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38139
	I0920 22:20:35.122295   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:20:35.122754   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:20:35.122775   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:20:35.123184   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:20:35.123393   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:35.123553   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:20:35.125092   64076 fix.go:112] recreateIfNeeded on no-preload-032884: state=Stopped err=<nil>
	I0920 22:20:35.125113   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	W0920 22:20:35.125267   64076 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 22:20:35.127162   64076 out.go:177] * Restarting existing kvm2 VM for "no-preload-032884" ...
	I0920 22:20:31.857961   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:34.358409   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:33.859962   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860480   65164 main.go:141] libmachine: (old-k8s-version-593412) Found IP for machine: 192.168.61.2
	I0920 22:20:33.860510   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has current primary IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.860519   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserving static IP address...
	I0920 22:20:33.860912   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.860939   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | skip adding static IP to network mk-old-k8s-version-593412 - found existing host DHCP lease matching {name: "old-k8s-version-593412", mac: "52:54:00:7f:66:3d", ip: "192.168.61.2"}
	I0920 22:20:33.860953   65164 main.go:141] libmachine: (old-k8s-version-593412) Reserved static IP address: 192.168.61.2
	I0920 22:20:33.860970   65164 main.go:141] libmachine: (old-k8s-version-593412) Waiting for SSH to be available...
	I0920 22:20:33.860982   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Getting to WaitForSSH function...
	I0920 22:20:33.863316   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863665   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.863690   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.863814   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH client type: external
	I0920 22:20:33.863852   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa (-rw-------)
	I0920 22:20:33.863884   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.2 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:33.863898   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | About to run SSH command:
	I0920 22:20:33.863916   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | exit 0
	I0920 22:20:33.987115   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:33.987448   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetConfigRaw
	I0920 22:20:33.988170   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:33.990787   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991200   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.991232   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.991450   65164 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/config.json ...
	I0920 22:20:33.991659   65164 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:33.991677   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:33.991880   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:33.994313   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994683   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:33.994715   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:33.994911   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:33.995094   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995194   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:33.995344   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:33.995459   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:33.995641   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:33.995651   65164 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:34.104874   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:34.104900   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105125   65164 buildroot.go:166] provisioning hostname "old-k8s-version-593412"
	I0920 22:20:34.105140   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.105271   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.108213   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108564   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.108595   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.108754   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.108946   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109088   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.109218   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.109384   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.109607   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.109624   65164 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-593412 && echo "old-k8s-version-593412" | sudo tee /etc/hostname
	I0920 22:20:34.236515   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-593412
	
	I0920 22:20:34.236547   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.239440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.239825   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.239983   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.240211   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240372   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.240534   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.240753   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.240978   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.241002   65164 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-593412' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-593412/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-593412' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:34.364324   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:34.364351   65164 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:34.364401   65164 buildroot.go:174] setting up certificates
	I0920 22:20:34.364413   65164 provision.go:84] configureAuth start
	I0920 22:20:34.364426   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetMachineName
	I0920 22:20:34.364641   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:34.367650   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368065   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.368123   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.368260   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.371062   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.371470   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.371621   65164 provision.go:143] copyHostCerts
	I0920 22:20:34.371684   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:34.371699   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:34.371756   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:34.371859   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:34.371869   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:34.371889   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:34.371941   65164 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:34.371949   65164 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:34.371966   65164 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:34.372024   65164 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-593412 san=[127.0.0.1 192.168.61.2 localhost minikube old-k8s-version-593412]
	I0920 22:20:34.456453   65164 provision.go:177] copyRemoteCerts
	I0920 22:20:34.456512   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:34.456538   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.459471   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459810   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.459871   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.459996   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.460225   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.460390   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.460538   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.547105   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:34.576231   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 22:20:34.603240   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 22:20:34.629295   65164 provision.go:87] duration metric: took 264.869688ms to configureAuth
	I0920 22:20:34.629320   65164 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:34.629521   65164 config.go:182] Loaded profile config "old-k8s-version-593412": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0920 22:20:34.629601   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.632577   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633006   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.633035   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.633191   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.633391   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633552   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.633682   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.633865   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:34.634029   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:34.634043   65164 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:34.870369   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:34.870393   65164 machine.go:96] duration metric: took 878.72152ms to provisionDockerMachine
	I0920 22:20:34.870404   65164 start.go:293] postStartSetup for "old-k8s-version-593412" (driver="kvm2")
	I0920 22:20:34.870414   65164 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:34.870446   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:34.870739   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:34.870770   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:34.873626   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874069   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:34.874098   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:34.874262   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:34.874477   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:34.874607   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:34.874731   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:34.958510   65164 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:34.962678   65164 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:34.962700   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:34.962769   65164 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:34.962873   65164 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:34.963010   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:34.973114   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:34.997799   65164 start.go:296] duration metric: took 127.379618ms for postStartSetup
	I0920 22:20:34.997837   65164 fix.go:56] duration metric: took 19.87358027s for fixHost
	I0920 22:20:34.997861   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.000420   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000761   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.000786   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.000953   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.001142   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001332   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.001492   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.001694   65164 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:35.001900   65164 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.61.2 22 <nil> <nil>}
	I0920 22:20:35.001914   65164 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:35.103956   65164 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870835.076325315
	
	I0920 22:20:35.103980   65164 fix.go:216] guest clock: 1726870835.076325315
	I0920 22:20:35.103991   65164 fix.go:229] Guest: 2024-09-20 22:20:35.076325315 +0000 UTC Remote: 2024-09-20 22:20:34.997841315 +0000 UTC m=+209.171100575 (delta=78.484ms)
	I0920 22:20:35.104021   65164 fix.go:200] guest clock delta is within tolerance: 78.484ms
	I0920 22:20:35.104031   65164 start.go:83] releasing machines lock for "old-k8s-version-593412", held for 19.979809523s
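	For context on the clock-skew check logged just above: fix.go reads the guest clock over SSH (`date +%s.%N`), compares it with the host clock, and accepts the 78.484ms delta as within tolerance. The Go sketch below only illustrates that comparison; the tolerance constant and function names are assumptions for illustration, not minikube's actual implementation.

```go
package main

import (
	"fmt"
	"time"
)

// maxClockDelta is an assumed tolerance; the real threshold in minikube may differ.
const maxClockDelta = 1 * time.Second

// clockDelta returns the absolute guest-vs-host clock difference and whether
// it falls within the assumed tolerance.
func clockDelta(guest, host time.Time) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= maxClockDelta
}

func main() {
	host := time.Now()
	guest := host.Add(78 * time.Millisecond) // roughly the delta seen in the log
	d, ok := clockDelta(guest, host)
	fmt.Printf("delta=%v, within tolerance: %v\n", d, ok)
}
```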
	I0920 22:20:35.104060   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.104327   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:35.107075   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107426   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.107463   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.107619   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108087   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108221   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .DriverName
	I0920 22:20:35.108323   65164 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:35.108377   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.108430   65164 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:35.108455   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHHostname
	I0920 22:20:35.111005   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111107   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111377   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111413   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111440   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:35.111456   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:35.111584   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111638   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHPort
	I0920 22:20:35.111760   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111795   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHKeyPath
	I0920 22:20:35.111939   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.111943   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetSSHUsername
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.112104   65164 sshutil.go:53] new ssh client: &{IP:192.168.61.2 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/old-k8s-version-593412/id_rsa Username:docker}
	I0920 22:20:35.189592   65164 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:35.213527   65164 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:35.364081   65164 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:35.370551   65164 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:35.370621   65164 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:35.388107   65164 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:35.388135   65164 start.go:495] detecting cgroup driver to use...
	I0920 22:20:35.388206   65164 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:35.410531   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:35.425842   65164 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:35.425902   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:35.441753   65164 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:35.457160   65164 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:35.578452   65164 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:35.754567   65164 docker.go:233] disabling docker service ...
	I0920 22:20:35.754653   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:35.772241   65164 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:35.788874   65164 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:35.911696   65164 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:36.050305   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:36.065694   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:36.085619   65164 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0920 22:20:36.085697   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.096527   65164 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:36.096599   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.108150   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.121282   65164 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:36.133335   65164 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:36.145549   65164 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:36.156370   65164 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:36.156429   65164 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:36.174821   65164 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:36.188579   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:36.355272   65164 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:36.460222   65164 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:36.460280   65164 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:36.466413   65164 start.go:563] Will wait 60s for crictl version
	I0920 22:20:36.466470   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:36.476263   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:36.519969   65164 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:36.520061   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.551914   65164 ssh_runner.go:195] Run: crio --version
	I0920 22:20:36.584210   65164 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0920 22:20:33.931343   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.430359   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:35.128483   64076 main.go:141] libmachine: (no-preload-032884) Calling .Start
	I0920 22:20:35.128679   64076 main.go:141] libmachine: (no-preload-032884) Ensuring networks are active...
	I0920 22:20:35.129407   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network default is active
	I0920 22:20:35.129723   64076 main.go:141] libmachine: (no-preload-032884) Ensuring network mk-no-preload-032884 is active
	I0920 22:20:35.130101   64076 main.go:141] libmachine: (no-preload-032884) Getting domain xml...
	I0920 22:20:35.130878   64076 main.go:141] libmachine: (no-preload-032884) Creating domain...
	I0920 22:20:36.466500   64076 main.go:141] libmachine: (no-preload-032884) Waiting to get IP...
	I0920 22:20:36.467609   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.468084   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.468162   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.468053   66666 retry.go:31] will retry after 296.142224ms: waiting for machine to come up
	I0920 22:20:36.765402   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:36.765985   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:36.766019   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:36.765907   66666 retry.go:31] will retry after 264.44324ms: waiting for machine to come up
	I0920 22:20:37.032449   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.032913   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.032983   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.032871   66666 retry.go:31] will retry after 366.351809ms: waiting for machine to come up
	I0920 22:20:37.401339   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.401912   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.401942   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.401867   66666 retry.go:31] will retry after 447.432413ms: waiting for machine to come up
	I0920 22:20:37.851659   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:37.852267   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:37.852292   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:37.852177   66666 retry.go:31] will retry after 547.737945ms: waiting for machine to come up
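	The retry.go lines above poll the libvirt network's DHCP leases for the new machine's IP and sleep a little longer after each miss ("will retry after ..."). A minimal Go sketch of that wait-with-growing-backoff loop follows; lookupIP, the attempt cap, and the delay formula are placeholders, not the real kvm2 driver code.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// errNoIP stands in for the "unable to find current IP address" condition in the log.
var errNoIP = errors.New("machine has no IP yet")

// lookupIP is a stand-in for querying the libvirt network's DHCP leases.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 { // pretend the lease shows up on the 5th try
		return "", errNoIP
	}
	return "192.168.61.2", nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Jittered, roughly increasing delay, similar in shape to the
		// "will retry after ..." intervals in the log.
		delay := time.Duration(attempt)*200*time.Millisecond +
			time.Duration(rand.Intn(200))*time.Millisecond
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, delay)
		time.Sleep(delay)
	}
	fmt.Println("gave up waiting for an IP")
}
```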
	I0920 22:20:36.858146   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.863304   64930 pod_ready.go:103] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:36.585309   65164 main.go:141] libmachine: (old-k8s-version-593412) Calling .GetIP
	I0920 22:20:36.588169   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588579   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7f:66:3d", ip: ""} in network mk-old-k8s-version-593412: {Iface:virbr3 ExpiryTime:2024-09-20 23:20:27 +0000 UTC Type:0 Mac:52:54:00:7f:66:3d Iaid: IPaddr:192.168.61.2 Prefix:24 Hostname:old-k8s-version-593412 Clientid:01:52:54:00:7f:66:3d}
	I0920 22:20:36.588608   65164 main.go:141] libmachine: (old-k8s-version-593412) DBG | domain old-k8s-version-593412 has defined IP address 192.168.61.2 and MAC address 52:54:00:7f:66:3d in network mk-old-k8s-version-593412
	I0920 22:20:36.588830   65164 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:36.593202   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:36.608192   65164 kubeadm.go:883] updating cluster {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:36.608325   65164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 22:20:36.608384   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:36.667481   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:36.667551   65164 ssh_runner.go:195] Run: which lz4
	I0920 22:20:36.672136   65164 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0920 22:20:36.677043   65164 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0920 22:20:36.677075   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0920 22:20:38.446678   65164 crio.go:462] duration metric: took 1.774616281s to copy over tarball
	I0920 22:20:38.446757   65164 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
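	The sequence above checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached preload tarball over when it does not, and extracts it with tar piped through lz4. Below is a minimal local Go sketch of that check-then-extract step, assuming tar and lz4 are on PATH; the paths and helper name are placeholders rather than minikube's real cache layout.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload checks for a preload tarball and, if present, extracts it the
// same way the log does (tar with xattrs preserved and an lz4 decompressor).
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball not found: %w", err)
	}
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println("skipping preload:", err)
	}
}
```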
	I0920 22:20:38.433307   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:40.932109   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:38.401406   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:38.401845   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:38.401872   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:38.401813   66666 retry.go:31] will retry after 768.751475ms: waiting for machine to come up
	I0920 22:20:39.171763   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.172262   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.172306   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.172196   66666 retry.go:31] will retry after 743.411926ms: waiting for machine to come up
	I0920 22:20:39.917169   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:39.917764   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:39.917788   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:39.917713   66666 retry.go:31] will retry after 962.106927ms: waiting for machine to come up
	I0920 22:20:40.881256   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:40.881826   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:40.881853   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:40.881761   66666 retry.go:31] will retry after 1.153833019s: waiting for machine to come up
	I0920 22:20:42.036986   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:42.037484   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:42.037507   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:42.037435   66666 retry.go:31] will retry after 2.265359794s: waiting for machine to come up
	I0920 22:20:41.358624   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:20:41.358656   64930 pod_ready.go:82] duration metric: took 11.508676383s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:41.358675   64930 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	I0920 22:20:43.367487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:41.624103   65164 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.177314792s)
	I0920 22:20:41.624140   65164 crio.go:469] duration metric: took 3.177430277s to extract the tarball
	I0920 22:20:41.624150   65164 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0920 22:20:41.682195   65164 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:41.728925   65164 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0920 22:20:41.728954   65164 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:41.729037   65164 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.729082   65164 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0920 22:20:41.729130   65164 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.729039   65164 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.729053   65164 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.729064   65164 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.729076   65164 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731145   65164 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:41.731168   65164 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0920 22:20:41.731417   65164 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.731618   65164 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.731649   65164 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.731764   65164 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.731827   65164 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.731883   65164 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.894612   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:41.895771   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:41.903674   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:41.909467   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:41.910830   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0920 22:20:41.926607   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:41.943477   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0920 22:20:42.011191   65164 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0920 22:20:42.011242   65164 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.011318   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.050009   65164 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0920 22:20:42.050293   65164 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.050375   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.093205   65164 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0920 22:20:42.093259   65164 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.093317   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106609   65164 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0920 22:20:42.106652   65164 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.106701   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106747   65164 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0920 22:20:42.106832   65164 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.106881   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.106699   65164 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0920 22:20:42.106936   65164 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.107028   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117272   65164 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0920 22:20:42.117381   65164 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0920 22:20:42.117404   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.117463   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.117496   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.117427   65164 ssh_runner.go:195] Run: which crictl
	I0920 22:20:42.117331   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.117556   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.117617   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.261915   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.262025   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.261930   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.262132   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.262186   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.262224   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.262299   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.416018   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0920 22:20:42.416073   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.416134   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0920 22:20:42.437099   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0920 22:20:42.437281   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0920 22:20:42.437389   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0920 22:20:42.437543   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0920 22:20:42.498530   65164 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:42.617564   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0920 22:20:42.617565   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0920 22:20:42.617665   65164 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0920 22:20:42.617767   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0920 22:20:42.629455   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0920 22:20:42.629553   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0920 22:20:42.629638   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0920 22:20:42.769494   65164 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0920 22:20:42.769571   65164 cache_images.go:92] duration metric: took 1.040600304s to LoadCachedImages
	W0920 22:20:42.769670   65164 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I0920 22:20:42.769690   65164 kubeadm.go:934] updating node { 192.168.61.2 8443 v1.20.0 crio true true} ...
	I0920 22:20:42.769834   65164 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-593412 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:20:42.769930   65164 ssh_runner.go:195] Run: crio config
	I0920 22:20:42.826657   65164 cni.go:84] Creating CNI manager for ""
	I0920 22:20:42.826684   65164 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:20:42.826695   65164 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:20:42.826713   65164 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-593412 NodeName:old-k8s-version-593412 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 22:20:42.826912   65164 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-593412"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:20:42.827017   65164 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 22:20:42.836906   65164 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:20:42.836972   65164 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:20:42.846776   65164 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (428 bytes)
	I0920 22:20:42.867946   65164 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:20:42.888646   65164 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I0920 22:20:42.908587   65164 ssh_runner.go:195] Run: grep 192.168.61.2	control-plane.minikube.internal$ /etc/hosts
	I0920 22:20:42.913022   65164 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:42.927492   65164 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:43.074355   65164 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:20:43.093712   65164 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412 for IP: 192.168.61.2
	I0920 22:20:43.093747   65164 certs.go:194] generating shared ca certs ...
	I0920 22:20:43.093768   65164 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.093938   65164 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:20:43.094003   65164 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:20:43.094016   65164 certs.go:256] generating profile certs ...
	I0920 22:20:43.094116   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.key
	I0920 22:20:43.094174   65164 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key.2074c1da
	I0920 22:20:43.094287   65164 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key
	I0920 22:20:43.094440   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:20:43.094482   65164 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:20:43.094503   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:20:43.094532   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:20:43.094556   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:20:43.094575   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:20:43.094615   65164 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:43.095441   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:20:43.139315   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:20:43.184809   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:20:43.224959   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:20:43.261692   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 22:20:43.304910   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 22:20:43.350211   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:20:43.383294   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:20:43.419400   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:20:43.450334   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:20:43.483749   65164 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:20:43.514957   65164 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:20:43.539734   65164 ssh_runner.go:195] Run: openssl version
	I0920 22:20:43.548836   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:20:43.564773   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571841   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.571904   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:20:43.580432   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:20:43.596782   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:20:43.613747   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620172   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.620241   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:20:43.628316   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:20:43.644084   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:20:43.656062   65164 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661620   65164 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.661687   65164 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:20:43.668857   65164 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:20:43.681747   65164 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:20:43.686842   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:20:43.694119   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:20:43.701308   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:20:43.710152   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:20:43.718080   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:20:43.725573   65164 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:20:43.733563   65164 kubeadm.go:392] StartCluster: {Name:old-k8s-version-593412 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-593412 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:20:43.733676   65164 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:20:43.733753   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.779321   65164 cri.go:89] found id: ""
	I0920 22:20:43.779414   65164 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:20:43.792290   65164 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:20:43.792308   65164 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:20:43.792354   65164 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:20:43.804754   65164 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:20:43.805793   65164 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-593412" does not appear in /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:20:43.807347   65164 kubeconfig.go:62] /home/jenkins/minikube-integration/19672-9614/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-593412" cluster setting kubeconfig missing "old-k8s-version-593412" context setting]
	I0920 22:20:43.809671   65164 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:20:43.812323   65164 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:20:43.824013   65164 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.2
	I0920 22:20:43.824057   65164 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:20:43.824072   65164 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:20:43.824154   65164 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:20:43.866570   65164 cri.go:89] found id: ""
	I0920 22:20:43.866647   65164 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:20:43.885113   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:20:43.895941   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:20:43.895964   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:20:43.896070   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:20:43.906350   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:20:43.906427   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:20:43.917506   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:20:43.927802   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:20:43.927873   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:20:43.938454   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.949394   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:20:43.949459   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:20:43.960239   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:20:43.971275   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:20:43.971343   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:20:43.981773   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:20:43.992415   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:44.128979   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.239452   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.110430813s)
	I0920 22:20:45.239495   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.508589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.628021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:20:45.731122   65164 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:20:45.731219   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:42.960716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:45.431334   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:47.431981   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:44.306025   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:44.306388   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:44.306418   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:44.306362   66666 retry.go:31] will retry after 1.811972294s: waiting for machine to come up
	I0920 22:20:46.119688   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:46.120257   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:46.120281   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:46.120206   66666 retry.go:31] will retry after 2.353824362s: waiting for machine to come up
	I0920 22:20:45.865875   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.366477   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:46.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:46.732091   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.232177   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:47.731967   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.231957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:48.731560   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.232211   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.731851   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:50.732174   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:49.930717   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.931624   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:48.476137   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:48.476630   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:48.476659   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:48.476585   66666 retry.go:31] will retry after 3.920442887s: waiting for machine to come up
	I0920 22:20:52.400058   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:52.400449   64076 main.go:141] libmachine: (no-preload-032884) DBG | unable to find current IP address of domain no-preload-032884 in network mk-no-preload-032884
	I0920 22:20:52.400471   64076 main.go:141] libmachine: (no-preload-032884) DBG | I0920 22:20:52.400426   66666 retry.go:31] will retry after 3.798500794s: waiting for machine to come up
	I0920 22:20:50.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:53.366485   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:51.231352   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:51.731345   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.231745   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:52.731622   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.231749   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:53.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.231457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.732036   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.232322   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:55.732142   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:54.430801   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.931991   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.200060   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200518   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has current primary IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.200541   64076 main.go:141] libmachine: (no-preload-032884) Found IP for machine: 192.168.72.19
	I0920 22:20:56.200550   64076 main.go:141] libmachine: (no-preload-032884) Reserving static IP address...
	I0920 22:20:56.200939   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.200983   64076 main.go:141] libmachine: (no-preload-032884) DBG | skip adding static IP to network mk-no-preload-032884 - found existing host DHCP lease matching {name: "no-preload-032884", mac: "52:54:00:a6:6f:3c", ip: "192.168.72.19"}
	I0920 22:20:56.200999   64076 main.go:141] libmachine: (no-preload-032884) Reserved static IP address: 192.168.72.19
	I0920 22:20:56.201014   64076 main.go:141] libmachine: (no-preload-032884) Waiting for SSH to be available...
	I0920 22:20:56.201042   64076 main.go:141] libmachine: (no-preload-032884) DBG | Getting to WaitForSSH function...
	I0920 22:20:56.203351   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203663   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.203695   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.203770   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH client type: external
	I0920 22:20:56.203790   64076 main.go:141] libmachine: (no-preload-032884) DBG | Using SSH private key: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa (-rw-------)
	I0920 22:20:56.203820   64076 main.go:141] libmachine: (no-preload-032884) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.19 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0920 22:20:56.203830   64076 main.go:141] libmachine: (no-preload-032884) DBG | About to run SSH command:
	I0920 22:20:56.203844   64076 main.go:141] libmachine: (no-preload-032884) DBG | exit 0
	I0920 22:20:56.331498   64076 main.go:141] libmachine: (no-preload-032884) DBG | SSH cmd err, output: <nil>: 
	I0920 22:20:56.331905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetConfigRaw
	I0920 22:20:56.333047   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.335830   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336198   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.336230   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.336462   64076 profile.go:143] Saving config to /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/config.json ...
	I0920 22:20:56.336687   64076 machine.go:93] provisionDockerMachine start ...
	I0920 22:20:56.336708   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:56.336901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.339784   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340171   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.340198   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.340373   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.340598   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340754   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.340901   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.341056   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.341274   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.341295   64076 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 22:20:56.451538   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0920 22:20:56.451567   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451783   64076 buildroot.go:166] provisioning hostname "no-preload-032884"
	I0920 22:20:56.451813   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.451994   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.454378   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454747   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.454777   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.454905   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.455093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.455344   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.455549   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.455768   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.455791   64076 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-032884 && echo "no-preload-032884" | sudo tee /etc/hostname
	I0920 22:20:56.577189   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-032884
	
	I0920 22:20:56.577218   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.580006   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580396   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.580419   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.580637   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.580821   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.580981   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.581093   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.581236   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:56.581447   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:56.581472   64076 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-032884' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-032884/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-032884' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 22:20:56.697182   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 22:20:56.697211   64076 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19672-9614/.minikube CaCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19672-9614/.minikube}
	I0920 22:20:56.697237   64076 buildroot.go:174] setting up certificates
	I0920 22:20:56.697250   64076 provision.go:84] configureAuth start
	I0920 22:20:56.697268   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetMachineName
	I0920 22:20:56.697530   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:56.700284   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700638   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.700669   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.700828   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.703088   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703460   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.703490   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.703557   64076 provision.go:143] copyHostCerts
	I0920 22:20:56.703638   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem, removing ...
	I0920 22:20:56.703658   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem
	I0920 22:20:56.703729   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/ca.pem (1078 bytes)
	I0920 22:20:56.703846   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem, removing ...
	I0920 22:20:56.703856   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem
	I0920 22:20:56.703888   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/cert.pem (1123 bytes)
	I0920 22:20:56.703976   64076 exec_runner.go:144] found /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem, removing ...
	I0920 22:20:56.703986   64076 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem
	I0920 22:20:56.704014   64076 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19672-9614/.minikube/key.pem (1679 bytes)
	I0920 22:20:56.704100   64076 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem org=jenkins.no-preload-032884 san=[127.0.0.1 192.168.72.19 localhost minikube no-preload-032884]
	I0920 22:20:56.866984   64076 provision.go:177] copyRemoteCerts
	I0920 22:20:56.867061   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 22:20:56.867092   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:56.869468   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869727   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:56.869753   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:56.869967   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:56.870146   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:56.870335   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:56.870460   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:56.957699   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 22:20:56.983450   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 22:20:57.010074   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 22:20:57.036805   64076 provision.go:87] duration metric: took 339.53481ms to configureAuth
	I0920 22:20:57.036831   64076 buildroot.go:189] setting minikube options for container-runtime
	I0920 22:20:57.036994   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:20:57.037101   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.040022   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040443   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.040474   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.040688   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.040870   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041024   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.041155   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.041351   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.041575   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.041597   64076 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0920 22:20:57.264274   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0920 22:20:57.264302   64076 machine.go:96] duration metric: took 927.599257ms to provisionDockerMachine
	I0920 22:20:57.264315   64076 start.go:293] postStartSetup for "no-preload-032884" (driver="kvm2")
	I0920 22:20:57.264328   64076 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 22:20:57.264353   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.264656   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 22:20:57.264689   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.267783   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268229   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.268267   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.268455   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.268639   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.268786   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.268927   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.353832   64076 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 22:20:57.358202   64076 info.go:137] Remote host: Buildroot 2023.02.9
	I0920 22:20:57.358226   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/addons for local assets ...
	I0920 22:20:57.358289   64076 filesync.go:126] Scanning /home/jenkins/minikube-integration/19672-9614/.minikube/files for local assets ...
	I0920 22:20:57.358364   64076 filesync.go:149] local asset: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem -> 167852.pem in /etc/ssl/certs
	I0920 22:20:57.358445   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 22:20:57.368387   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:20:57.393307   64076 start.go:296] duration metric: took 128.976035ms for postStartSetup
	I0920 22:20:57.393353   64076 fix.go:56] duration metric: took 22.289177312s for fixHost
	I0920 22:20:57.393377   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.396200   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396566   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.396586   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.396760   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.396953   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397106   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.397220   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.397357   64076 main.go:141] libmachine: Using SSH client type: native
	I0920 22:20:57.397563   64076 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 192.168.72.19 22 <nil> <nil>}
	I0920 22:20:57.397578   64076 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0920 22:20:57.508079   64076 main.go:141] libmachine: SSH cmd err, output: <nil>: 1726870857.462716254
	
	I0920 22:20:57.508102   64076 fix.go:216] guest clock: 1726870857.462716254
	I0920 22:20:57.508111   64076 fix.go:229] Guest: 2024-09-20 22:20:57.462716254 +0000 UTC Remote: 2024-09-20 22:20:57.393358015 +0000 UTC m=+359.143310428 (delta=69.358239ms)
	I0920 22:20:57.508136   64076 fix.go:200] guest clock delta is within tolerance: 69.358239ms
	I0920 22:20:57.508143   64076 start.go:83] releasing machines lock for "no-preload-032884", held for 22.404002843s
	I0920 22:20:57.508179   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.508481   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:57.511316   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511660   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.511685   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.511832   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512291   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512438   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:20:57.512499   64076 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 22:20:57.512558   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.512687   64076 ssh_runner.go:195] Run: cat /version.json
	I0920 22:20:57.512714   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:20:57.515494   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515674   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515884   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.515921   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.515962   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:57.516002   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:57.516189   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516287   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:20:57.516370   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516459   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:20:57.516509   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516578   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:20:57.516639   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.516667   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:20:57.596100   64076 ssh_runner.go:195] Run: systemctl --version
	I0920 22:20:57.621437   64076 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0920 22:20:57.765920   64076 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0920 22:20:57.772259   64076 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0920 22:20:57.772330   64076 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 22:20:57.790230   64076 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 22:20:57.790257   64076 start.go:495] detecting cgroup driver to use...
	I0920 22:20:57.790341   64076 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0920 22:20:57.808916   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0920 22:20:57.824120   64076 docker.go:217] disabling cri-docker service (if available) ...
	I0920 22:20:57.824175   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 22:20:57.838655   64076 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 22:20:57.853621   64076 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 22:20:57.966670   64076 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 22:20:58.108386   64076 docker.go:233] disabling docker service ...
	I0920 22:20:58.108456   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 22:20:58.124306   64076 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 22:20:58.138341   64076 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 22:20:58.292771   64076 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 22:20:58.415026   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 22:20:58.431905   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 22:20:58.451166   64076 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0920 22:20:58.451227   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.462234   64076 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0920 22:20:58.462319   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.473548   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.484592   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.495498   64076 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 22:20:58.506876   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.517776   64076 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.536223   64076 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0920 22:20:58.548104   64076 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 22:20:58.558182   64076 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0920 22:20:58.558258   64076 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0920 22:20:58.573018   64076 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 22:20:58.584883   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:20:58.715520   64076 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0920 22:20:58.810495   64076 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0920 22:20:58.810565   64076 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0920 22:20:58.815745   64076 start.go:563] Will wait 60s for crictl version
	I0920 22:20:58.815808   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:58.819904   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 22:20:58.866227   64076 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0920 22:20:58.866299   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.895897   64076 ssh_runner.go:195] Run: crio --version
	I0920 22:20:58.930268   64076 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I0920 22:20:55.865946   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.365557   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:56.231551   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:56.732184   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.231519   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:57.732306   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.231766   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:58.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.232250   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.731295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:00.732178   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:20:59.431393   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.930337   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:20:58.931424   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetIP
	I0920 22:20:58.934010   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934261   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:20:58.934296   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:20:58.934479   64076 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0920 22:20:58.938795   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:20:58.952249   64076 kubeadm.go:883] updating cluster {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 22:20:58.952359   64076 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 22:20:58.952390   64076 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 22:20:58.990874   64076 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0920 22:20:58.990895   64076 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.991004   64076 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.991042   64076 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.991051   64076 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0920 22:20:58.991027   64076 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:58.991070   64076 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.991094   64076 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.991320   64076 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:58.992441   64076 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:58.992446   64076 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:58.992445   64076 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:58.992529   64076 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:58.992537   64076 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:58.992888   64076 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.160630   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.162022   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.175945   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.176874   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.206536   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.221391   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.233905   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0920 22:20:59.279521   64076 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0920 22:20:59.279570   64076 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.279592   64076 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0920 22:20:59.279618   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.279624   64076 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.279667   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.320755   64076 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0920 22:20:59.320802   64076 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.320850   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.321041   64076 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0920 22:20:59.321076   64076 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.321121   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.339294   64076 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0920 22:20:59.339346   64076 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.339406   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.368217   64076 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0920 22:20:59.368271   64076 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.368321   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.462251   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.462299   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.462353   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.462395   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.462450   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.462477   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603415   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.603435   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.603559   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.603611   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.603653   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.603690   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.740308   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0920 22:20:59.740311   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0920 22:20:59.740391   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0920 22:20:59.756577   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0920 22:20:59.756670   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0920 22:20:59.756711   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0920 22:20:59.825056   64076 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.886185   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0920 22:20:59.886247   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0920 22:20:59.886320   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.886335   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:20:59.886200   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0920 22:20:59.886437   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:20:59.905891   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0920 22:20:59.905934   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0920 22:20:59.906012   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:20:59.906032   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0920 22:20:59.906037   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:20:59.906105   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:20:59.944041   64076 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0920 22:20:59.944085   64076 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:20:59.944092   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I0920 22:20:59.944109   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944115   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I0920 22:20:59.944124   64076 ssh_runner.go:195] Run: which crictl
	I0920 22:20:59.944156   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0920 22:20:59.944192   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I0920 22:20:59.944232   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I0920 22:20:59.944290   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I0920 22:20:59.944305   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I0920 22:21:02.634084   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.689904652s)
	I0920 22:21:02.634115   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0920 22:21:02.634134   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634156   64076 ssh_runner.go:235] Completed: which crictl: (2.690013191s)
	I0920 22:21:02.634170   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I0920 22:21:02.634206   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:00.365987   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:02.366056   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.865138   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:01.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:01.731557   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.232222   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:02.732161   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.231472   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.732256   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.231599   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:04.731956   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.231815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:05.731307   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:03.931240   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:05.932280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:04.514463   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (1.880270825s)
	I0920 22:21:04.514501   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0920 22:21:04.514506   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.880277859s)
	I0920 22:21:04.514634   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:04.514530   64076 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:04.514708   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I0920 22:21:07.903361   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.388614598s)
	I0920 22:21:07.903407   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0920 22:21:07.903447   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903457   64076 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.388799941s)
	I0920 22:21:07.903508   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0920 22:21:07.903525   64076 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:21:07.948935   64076 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0920 22:21:07.949066   64076 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:06.867117   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.365054   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:06.231846   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:06.732185   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.232008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:07.732202   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.232026   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.732223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.231619   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:09.732182   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.232303   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:10.731940   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:08.432619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:10.432916   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:09.377740   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.47420389s)
	I0920 22:21:09.377777   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0920 22:21:09.377799   64076 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:09.377799   64076 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.428713149s)
	I0920 22:21:09.377821   64076 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0920 22:21:09.377855   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I0920 22:21:11.449954   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.072068983s)
	I0920 22:21:11.449989   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0920 22:21:11.450021   64076 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.450070   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0920 22:21:11.366565   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.865931   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:11.231381   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:11.731874   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.232048   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.732168   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.232175   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:13.732162   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.231648   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:14.732176   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.232078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:15.732039   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:12.931619   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:14.931872   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:17.431853   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:13.420242   64076 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.970150355s)
	I0920 22:21:13.420269   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0920 22:21:13.420292   64076 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:13.420327   64076 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0920 22:21:14.070854   64076 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19672-9614/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0920 22:21:14.070905   64076 cache_images.go:123] Successfully loaded all cached images
	I0920 22:21:14.070914   64076 cache_images.go:92] duration metric: took 15.080004062s to LoadCachedImages
	I0920 22:21:14.070930   64076 kubeadm.go:934] updating node { 192.168.72.19 8443 v1.31.1 crio true true} ...
	I0920 22:21:14.071097   64076 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-032884 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.19
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 22:21:14.071168   64076 ssh_runner.go:195] Run: crio config
	I0920 22:21:14.124217   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:14.124245   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:14.124254   64076 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 22:21:14.124274   64076 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.19 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-032884 NodeName:no-preload-032884 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.19"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.19 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 22:21:14.124421   64076 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.19
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-032884"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.19
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.19"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 22:21:14.124480   64076 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 22:21:14.136928   64076 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 22:21:14.136997   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 22:21:14.147175   64076 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0920 22:21:14.164546   64076 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 22:21:14.181818   64076 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0920 22:21:14.199098   64076 ssh_runner.go:195] Run: grep 192.168.72.19	control-plane.minikube.internal$ /etc/hosts
	I0920 22:21:14.203042   64076 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.19	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 22:21:14.216620   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:21:14.344665   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:21:14.365604   64076 certs.go:68] Setting up /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884 for IP: 192.168.72.19
	I0920 22:21:14.365629   64076 certs.go:194] generating shared ca certs ...
	I0920 22:21:14.365649   64076 certs.go:226] acquiring lock for ca certs: {Name:mk021d7d1c007b65983ab506ab378319a6623d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:21:14.365827   64076 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key
	I0920 22:21:14.365885   64076 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key
	I0920 22:21:14.365899   64076 certs.go:256] generating profile certs ...
	I0920 22:21:14.366012   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.key
	I0920 22:21:14.366104   64076 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key.bbadb61c
	I0920 22:21:14.366162   64076 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key
	I0920 22:21:14.366353   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem (1338 bytes)
	W0920 22:21:14.366400   64076 certs.go:480] ignoring /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785_empty.pem, impossibly tiny 0 bytes
	I0920 22:21:14.366414   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 22:21:14.366450   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/ca.pem (1078 bytes)
	I0920 22:21:14.366489   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/cert.pem (1123 bytes)
	I0920 22:21:14.366519   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/certs/key.pem (1679 bytes)
	I0920 22:21:14.366575   64076 certs.go:484] found cert: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem (1708 bytes)
	I0920 22:21:14.367591   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 22:21:14.427412   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 22:21:14.462466   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 22:21:14.501761   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 22:21:14.536382   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0920 22:21:14.571781   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 22:21:14.598326   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 22:21:14.623017   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 22:21:14.648918   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/ssl/certs/167852.pem --> /usr/share/ca-certificates/167852.pem (1708 bytes)
	I0920 22:21:14.675898   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 22:21:14.701225   64076 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19672-9614/.minikube/certs/16785.pem --> /usr/share/ca-certificates/16785.pem (1338 bytes)
	I0920 22:21:14.726996   64076 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 22:21:14.748808   64076 ssh_runner.go:195] Run: openssl version
	I0920 22:21:14.755400   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/167852.pem && ln -fs /usr/share/ca-certificates/167852.pem /etc/ssl/certs/167852.pem"
	I0920 22:21:14.768456   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774027   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 21:05 /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.774096   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/167852.pem
	I0920 22:21:14.780817   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/167852.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 22:21:14.796292   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 22:21:14.810475   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815697   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 20:48 /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.815773   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 22:21:14.822267   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 22:21:14.834264   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16785.pem && ln -fs /usr/share/ca-certificates/16785.pem /etc/ssl/certs/16785.pem"
	I0920 22:21:14.846103   64076 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850719   64076 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 21:05 /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.850783   64076 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16785.pem
	I0920 22:21:14.856898   64076 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16785.pem /etc/ssl/certs/51391683.0"
	I0920 22:21:14.868699   64076 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 22:21:14.873403   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 22:21:14.879636   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 22:21:14.885839   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 22:21:14.892035   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 22:21:14.897881   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 22:21:14.903783   64076 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0920 22:21:14.909649   64076 kubeadm.go:392] StartCluster: {Name:no-preload-032884 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:no-preload-032884 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 22:21:14.909737   64076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0920 22:21:14.909780   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:14.953105   64076 cri.go:89] found id: ""
	I0920 22:21:14.953180   64076 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 22:21:14.963403   64076 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 22:21:14.963423   64076 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 22:21:14.963471   64076 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 22:21:14.973488   64076 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 22:21:14.974547   64076 kubeconfig.go:125] found "no-preload-032884" server: "https://192.168.72.19:8443"
	I0920 22:21:14.976821   64076 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 22:21:14.986722   64076 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.19
	I0920 22:21:14.986757   64076 kubeadm.go:1160] stopping kube-system containers ...
	I0920 22:21:14.986770   64076 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0920 22:21:14.986842   64076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 22:21:15.023234   64076 cri.go:89] found id: ""
	I0920 22:21:15.023305   64076 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0920 22:21:15.041045   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:21:15.051274   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:21:15.051297   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:21:15.051341   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:21:15.061647   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:21:15.061697   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:21:15.072071   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:21:15.081989   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:21:15.082043   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:21:15.092302   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.102314   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:21:15.102372   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:21:15.112160   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:21:15.121675   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:21:15.121733   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:21:15.132459   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:21:15.143233   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:15.261579   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:16.912557   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.650944403s)
	I0920 22:21:16.912593   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.142147   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.220002   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:17.345551   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:21:17.345659   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.846789   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.365656   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:18.366203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:16.231902   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:16.731672   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.232244   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:17.731640   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.231400   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.231727   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:19.731713   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.231984   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:20.732224   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.346730   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:18.385972   64076 api_server.go:72] duration metric: took 1.040420208s to wait for apiserver process to appear ...
	I0920 22:21:18.385999   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:21:18.386027   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:18.386479   64076 api_server.go:269] stopped: https://192.168.72.19:8443/healthz: Get "https://192.168.72.19:8443/healthz": dial tcp 192.168.72.19:8443: connect: connection refused
	I0920 22:21:18.886429   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.873610   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.873654   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.873666   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.955077   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.955105   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:20.955121   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:20.968034   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0920 22:21:20.968062   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0920 22:21:21.386680   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.391109   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.391148   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:21.886726   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:21.891765   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0920 22:21:21.891802   64076 api_server.go:103] status: https://192.168.72.19:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0920 22:21:22.386330   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:21:22.394934   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:21:22.404568   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:21:22.404599   64076 api_server.go:131] duration metric: took 4.018587856s to wait for apiserver health ...
	I0920 22:21:22.404608   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:21:22.404614   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:21:22.406538   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:21:19.930612   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.931883   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.408344   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:21:22.424308   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:21:22.460962   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:21:22.477489   64076 system_pods.go:59] 8 kube-system pods found
	I0920 22:21:22.477609   64076 system_pods.go:61] "coredns-7c65d6cfc9-pdcjb" [9c28694b-a51e-416c-9abc-39e692aa2f28] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0920 22:21:22.477652   64076 system_pods.go:61] "etcd-no-preload-032884" [ee560f0c-02e1-4694-af7c-64a928e77b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0920 22:21:22.477685   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [9861f45c-6e90-4756-b045-2d8e3426dc9f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0920 22:21:22.477706   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [f16002bd-2240-4645-a49b-ef9e8c0588e1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0920 22:21:22.477727   64076 system_pods.go:61] "kube-proxy-mp98d" [75b05d31-33ee-407f-80dc-360b71c17cba] Running
	I0920 22:21:22.477746   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [72404a91-c915-4f6e-9dbb-d8df10564d93] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0920 22:21:22.477772   64076 system_pods.go:61] "metrics-server-6867b74b74-x9qjw" [ae39ba1f-b2b9-42dc-9f7c-ac44997798aa] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:21:22.477792   64076 system_pods.go:61] "storage-provisioner" [cbd15398-a74b-4010-8fb1-f90e13d4f682] Running
	I0920 22:21:22.477811   64076 system_pods.go:74] duration metric: took 16.828942ms to wait for pod list to return data ...
	I0920 22:21:22.477835   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:21:22.486981   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:21:22.487019   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:21:22.487033   64076 node_conditions.go:105] duration metric: took 9.17624ms to run NodePressure ...
	I0920 22:21:22.487056   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0920 22:21:22.780784   64076 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785269   64076 kubeadm.go:739] kubelet initialised
	I0920 22:21:22.785300   64076 kubeadm.go:740] duration metric: took 4.478578ms waiting for restarted kubelet to initialise ...
	I0920 22:21:22.785310   64076 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:21:22.800811   64076 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.810781   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810819   64076 pod_ready.go:82] duration metric: took 9.97275ms for pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.810832   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "coredns-7c65d6cfc9-pdcjb" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.810841   64076 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.821154   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821183   64076 pod_ready.go:82] duration metric: took 10.330752ms for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.821194   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "etcd-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.821202   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.827310   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827352   64076 pod_ready.go:82] duration metric: took 6.141087ms for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.827365   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-apiserver-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.827374   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:22.870394   64076 pod_ready.go:98] node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870419   64076 pod_ready.go:82] duration metric: took 43.034455ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	E0920 22:21:22.870428   64076 pod_ready.go:67] WaitExtra: waitPodCondition: node "no-preload-032884" hosting pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-032884" has status "Ready":"False"
	I0920 22:21:22.870434   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264677   64076 pod_ready.go:93] pod "kube-proxy-mp98d" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:23.264706   64076 pod_ready.go:82] duration metric: took 394.262896ms for pod "kube-proxy-mp98d" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:23.264719   64076 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:20.866321   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:22.866504   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:21.232197   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:21.732280   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.232223   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:22.731989   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.231835   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:23.731412   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.232159   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.731357   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.231570   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:25.731688   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:24.432658   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.930043   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.272790   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.772961   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:25.366481   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:27.866181   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:26.232179   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:26.731338   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.232243   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:27.731637   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.231903   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.732246   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.231673   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:29.732195   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.231646   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:30.731750   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:28.930324   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:30.930675   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:29.775001   64076 pod_ready.go:103] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.771168   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:21:31.771198   64076 pod_ready.go:82] duration metric: took 8.506469796s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:31.771210   64076 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	I0920 22:21:30.365604   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:32.366379   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:34.865420   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:31.232139   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:31.732146   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.231399   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:32.732226   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.231350   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.731518   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.231301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:34.731329   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.231639   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:35.731944   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:33.430342   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:35.430373   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:37.430529   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:33.779291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.277659   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:38.278264   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.865728   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:39.364784   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:36.231341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:36.732138   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.231362   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:37.732107   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.232187   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:38.731650   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.231592   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.732193   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.232273   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:40.732121   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:39.931150   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:42.430690   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:40.778518   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.276961   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.368031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:43.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:41.231524   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:41.731282   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.232204   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:42.732205   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.232173   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:43.732209   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.231528   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:44.732008   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.231572   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:45.731279   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:45.731374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:45.771664   65164 cri.go:89] found id: ""
	I0920 22:21:45.771685   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.771692   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:45.771697   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:45.771753   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:45.805953   65164 cri.go:89] found id: ""
	I0920 22:21:45.805983   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.805994   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:45.806001   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:45.806070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:45.841111   65164 cri.go:89] found id: ""
	I0920 22:21:45.841140   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.841152   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:45.841159   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:45.841217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:44.430726   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:46.431384   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.277558   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:47.277869   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.865916   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:48.366375   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:45.877729   65164 cri.go:89] found id: ""
	I0920 22:21:45.877750   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.877758   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:45.877764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:45.877814   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:45.919400   65164 cri.go:89] found id: ""
	I0920 22:21:45.919429   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.919440   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:45.919447   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:45.919504   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:45.957169   65164 cri.go:89] found id: ""
	I0920 22:21:45.957199   65164 logs.go:276] 0 containers: []
	W0920 22:21:45.957220   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:45.957233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:45.957289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:46.001079   65164 cri.go:89] found id: ""
	I0920 22:21:46.001110   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.001121   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:46.001128   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:46.001194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:46.037774   65164 cri.go:89] found id: ""
	I0920 22:21:46.037806   65164 logs.go:276] 0 containers: []
	W0920 22:21:46.037818   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:46.037830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:46.037843   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:46.079990   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:46.080016   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:46.129685   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:46.129718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:46.144045   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:46.144078   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:46.282272   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:46.282303   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:46.282318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:48.855633   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:48.871768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:48.871829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:48.911870   65164 cri.go:89] found id: ""
	I0920 22:21:48.911901   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.911913   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:48.911923   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:48.911991   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:48.950456   65164 cri.go:89] found id: ""
	I0920 22:21:48.950481   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.950492   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:48.950500   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:48.950562   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:48.988787   65164 cri.go:89] found id: ""
	I0920 22:21:48.988817   65164 logs.go:276] 0 containers: []
	W0920 22:21:48.988827   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:48.988835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:48.988883   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:49.028550   65164 cri.go:89] found id: ""
	I0920 22:21:49.028580   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.028590   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:49.028600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:49.028659   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:49.066926   65164 cri.go:89] found id: ""
	I0920 22:21:49.066951   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.066977   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:49.066986   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:49.067047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:49.106777   65164 cri.go:89] found id: ""
	I0920 22:21:49.106807   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.106816   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:49.106822   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:49.106879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:49.143823   65164 cri.go:89] found id: ""
	I0920 22:21:49.143856   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.143867   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:49.143875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:49.143935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:49.181471   65164 cri.go:89] found id: ""
	I0920 22:21:49.181496   65164 logs.go:276] 0 containers: []
	W0920 22:21:49.181504   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:49.181512   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:49.181523   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:49.266480   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:49.266517   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:49.326247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:49.326273   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:49.387982   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:49.388022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:49.404125   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:49.404157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:49.485426   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:48.932370   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.430552   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:49.278734   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.777609   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:50.865089   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.365218   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:51.985703   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:51.999662   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:51.999723   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:52.035758   65164 cri.go:89] found id: ""
	I0920 22:21:52.035790   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.035800   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:52.035808   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:52.035865   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:52.082846   65164 cri.go:89] found id: ""
	I0920 22:21:52.082869   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.082877   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:52.082883   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:52.082954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:52.119258   65164 cri.go:89] found id: ""
	I0920 22:21:52.119299   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.119311   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:52.119319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:52.119380   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:52.155609   65164 cri.go:89] found id: ""
	I0920 22:21:52.155635   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.155645   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:52.155652   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:52.155710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:52.191493   65164 cri.go:89] found id: ""
	I0920 22:21:52.191519   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.191527   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:52.191534   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:52.191608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:52.227088   65164 cri.go:89] found id: ""
	I0920 22:21:52.227121   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.227131   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:52.227138   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:52.227204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:52.264503   65164 cri.go:89] found id: ""
	I0920 22:21:52.264529   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.264538   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:52.264545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:52.264611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:52.302158   65164 cri.go:89] found id: ""
	I0920 22:21:52.302185   65164 logs.go:276] 0 containers: []
	W0920 22:21:52.302193   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:52.302202   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:52.302217   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:52.356130   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:52.356168   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:52.373072   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:52.373096   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:52.446415   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:52.446437   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:52.446454   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:52.527964   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:52.528009   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.077589   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:55.091560   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:55.091615   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:55.127894   65164 cri.go:89] found id: ""
	I0920 22:21:55.127922   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.127949   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:55.127971   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:55.128030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:55.162522   65164 cri.go:89] found id: ""
	I0920 22:21:55.162546   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.162554   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:55.162562   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:55.162617   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:55.199445   65164 cri.go:89] found id: ""
	I0920 22:21:55.199470   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.199480   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:55.199488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:55.199546   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:55.239316   65164 cri.go:89] found id: ""
	I0920 22:21:55.239340   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.239350   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:55.239357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:55.239418   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:55.278165   65164 cri.go:89] found id: ""
	I0920 22:21:55.278194   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.278204   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:55.278212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:55.278270   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:55.313143   65164 cri.go:89] found id: ""
	I0920 22:21:55.313173   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.313184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:55.313192   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:55.313261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:55.352661   65164 cri.go:89] found id: ""
	I0920 22:21:55.352691   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.352699   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:55.352704   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:55.352751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:55.390108   65164 cri.go:89] found id: ""
	I0920 22:21:55.390147   65164 logs.go:276] 0 containers: []
	W0920 22:21:55.390158   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:55.390169   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:55.390184   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:55.403652   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:55.403680   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:55.490456   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:55.490477   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:55.490488   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:55.573164   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:55.573202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:55.615158   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:55.615188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:53.430808   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.431168   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.431504   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:53.778433   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.778670   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.278404   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:55.365277   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:57.365842   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:59.365976   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:21:58.165907   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:21:58.180329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:21:58.180397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:21:58.215920   65164 cri.go:89] found id: ""
	I0920 22:21:58.215946   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.215954   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:21:58.215973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:21:58.216030   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:21:58.253178   65164 cri.go:89] found id: ""
	I0920 22:21:58.253211   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.253222   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:21:58.253229   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:21:58.253287   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:21:58.289345   65164 cri.go:89] found id: ""
	I0920 22:21:58.289373   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.289384   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:21:58.289391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:21:58.289451   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:21:58.326430   65164 cri.go:89] found id: ""
	I0920 22:21:58.326460   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.326468   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:21:58.326476   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:21:58.326535   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:21:58.362723   65164 cri.go:89] found id: ""
	I0920 22:21:58.362749   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.362761   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:21:58.362769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:21:58.362824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:21:58.398827   65164 cri.go:89] found id: ""
	I0920 22:21:58.398857   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.398867   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:21:58.398875   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:21:58.398928   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:21:58.444833   65164 cri.go:89] found id: ""
	I0920 22:21:58.444864   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.444875   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:21:58.444883   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:21:58.444948   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:21:58.482692   65164 cri.go:89] found id: ""
	I0920 22:21:58.482723   65164 logs.go:276] 0 containers: []
	W0920 22:21:58.482735   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:21:58.482745   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:21:58.482756   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:21:58.567138   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:21:58.567172   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:21:58.606392   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:21:58.606421   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:21:58.658568   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:21:58.658605   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:21:58.672830   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:21:58.672855   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:21:58.742602   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:21:59.930574   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.431132   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:00.279246   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:02.778434   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.865439   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:04.366915   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:01.243532   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:01.257408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:01.257493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:01.295259   65164 cri.go:89] found id: ""
	I0920 22:22:01.295284   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.295292   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:01.295297   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:01.295348   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:01.334119   65164 cri.go:89] found id: ""
	I0920 22:22:01.334145   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.334157   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:01.334164   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:01.334222   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:01.376955   65164 cri.go:89] found id: ""
	I0920 22:22:01.376985   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.376997   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:01.377005   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:01.377063   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:01.416680   65164 cri.go:89] found id: ""
	I0920 22:22:01.416703   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.416712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:01.416718   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:01.416783   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:01.455042   65164 cri.go:89] found id: ""
	I0920 22:22:01.455064   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.455072   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:01.455077   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:01.455135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:01.499243   65164 cri.go:89] found id: ""
	I0920 22:22:01.499271   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.499283   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:01.499290   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:01.499347   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:01.535757   65164 cri.go:89] found id: ""
	I0920 22:22:01.535793   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.535804   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:01.535812   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:01.535876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:01.574409   65164 cri.go:89] found id: ""
	I0920 22:22:01.574447   65164 logs.go:276] 0 containers: []
	W0920 22:22:01.574459   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:01.574470   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:01.574485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:01.615629   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:01.615659   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:01.668446   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:01.668481   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:01.683044   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:01.683071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:01.751190   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:01.751210   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:01.751221   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.335881   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:04.351092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:04.351149   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:04.386306   65164 cri.go:89] found id: ""
	I0920 22:22:04.386337   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.386347   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:04.386365   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:04.386424   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:04.419440   65164 cri.go:89] found id: ""
	I0920 22:22:04.419472   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.419502   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:04.419511   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:04.419571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:04.456005   65164 cri.go:89] found id: ""
	I0920 22:22:04.456043   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.456054   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:04.456061   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:04.456125   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:04.490878   65164 cri.go:89] found id: ""
	I0920 22:22:04.490909   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.490919   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:04.490927   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:04.491000   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:04.527312   65164 cri.go:89] found id: ""
	I0920 22:22:04.527342   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.527353   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:04.527361   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:04.527441   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:04.564441   65164 cri.go:89] found id: ""
	I0920 22:22:04.564471   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.564481   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:04.564487   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:04.564542   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:04.598843   65164 cri.go:89] found id: ""
	I0920 22:22:04.598870   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.598878   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:04.598884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:04.598946   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:04.632575   65164 cri.go:89] found id: ""
	I0920 22:22:04.632604   65164 logs.go:276] 0 containers: []
	W0920 22:22:04.632615   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:04.632625   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:04.632640   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:04.683629   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:04.683668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:04.697138   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:04.697169   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:04.768051   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:04.768076   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:04.768087   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:04.848054   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:04.848102   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:04.930934   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.431086   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:05.279145   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.778351   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:06.865475   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:08.865586   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:07.396960   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:07.410612   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:07.410681   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:07.450652   65164 cri.go:89] found id: ""
	I0920 22:22:07.450680   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.450688   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:07.450694   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:07.450739   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:07.491547   65164 cri.go:89] found id: ""
	I0920 22:22:07.491577   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.491588   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:07.491595   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:07.491652   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:07.532025   65164 cri.go:89] found id: ""
	I0920 22:22:07.532060   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.532069   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:07.532076   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:07.532135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:07.574139   65164 cri.go:89] found id: ""
	I0920 22:22:07.574175   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.574186   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:07.574194   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:07.574257   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:07.614572   65164 cri.go:89] found id: ""
	I0920 22:22:07.614599   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.614610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:07.614618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:07.614679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:07.651975   65164 cri.go:89] found id: ""
	I0920 22:22:07.652012   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.652022   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:07.652029   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:07.652091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:07.690273   65164 cri.go:89] found id: ""
	I0920 22:22:07.690300   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.690311   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:07.690319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:07.690379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:07.727144   65164 cri.go:89] found id: ""
	I0920 22:22:07.727169   65164 logs.go:276] 0 containers: []
	W0920 22:22:07.727179   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:07.727189   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:07.727204   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:07.784607   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:07.784645   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:07.800082   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:07.800113   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:07.872842   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:07.872870   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:07.872885   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:07.953169   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:07.953205   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:10.496094   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:10.509645   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:10.509711   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:10.544590   65164 cri.go:89] found id: ""
	I0920 22:22:10.544620   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.544628   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:10.544634   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:10.544690   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:10.581062   65164 cri.go:89] found id: ""
	I0920 22:22:10.581091   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.581100   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:10.581106   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:10.581166   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:10.616337   65164 cri.go:89] found id: ""
	I0920 22:22:10.616363   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.616372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:10.616380   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:10.616438   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:10.655747   65164 cri.go:89] found id: ""
	I0920 22:22:10.655769   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.655777   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:10.655784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:10.655832   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:10.689158   65164 cri.go:89] found id: ""
	I0920 22:22:10.689188   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.689199   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:10.689207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:10.689269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:10.724321   65164 cri.go:89] found id: ""
	I0920 22:22:10.724355   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.724369   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:10.724376   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:10.724447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:10.764105   65164 cri.go:89] found id: ""
	I0920 22:22:10.764134   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.764143   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:10.764148   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:10.764217   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:10.802620   65164 cri.go:89] found id: ""
	I0920 22:22:10.802650   65164 logs.go:276] 0 containers: []
	W0920 22:22:10.802664   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:10.802674   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:10.802687   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:10.854739   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:10.854773   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:09.931917   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.430948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.279412   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:12.778166   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.866822   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:13.366013   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:10.869974   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:10.869996   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:10.945364   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:10.945386   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:10.945399   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:11.029379   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:11.029419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:13.576924   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:13.591641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:13.591716   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:13.626683   65164 cri.go:89] found id: ""
	I0920 22:22:13.626710   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.626718   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:13.626724   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:13.626770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:13.666368   65164 cri.go:89] found id: ""
	I0920 22:22:13.666400   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.666408   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:13.666413   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:13.666469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:13.702596   65164 cri.go:89] found id: ""
	I0920 22:22:13.702618   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.702634   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:13.702642   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:13.702706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:13.736359   65164 cri.go:89] found id: ""
	I0920 22:22:13.736382   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.736390   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:13.736396   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:13.736446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:13.770584   65164 cri.go:89] found id: ""
	I0920 22:22:13.770615   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.770626   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:13.770633   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:13.770693   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:13.808637   65164 cri.go:89] found id: ""
	I0920 22:22:13.808668   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.808680   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:13.808688   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:13.808748   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:13.844429   65164 cri.go:89] found id: ""
	I0920 22:22:13.844476   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.844489   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:13.844497   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:13.844563   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:13.882730   65164 cri.go:89] found id: ""
	I0920 22:22:13.882759   65164 logs.go:276] 0 containers: []
	W0920 22:22:13.882770   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:13.882778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:13.882790   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:13.939998   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:13.940028   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:13.954014   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:13.954043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:14.031991   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:14.032016   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:14.032030   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:14.109378   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:14.109413   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:14.431794   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.932980   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.277777   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.779835   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:15.865687   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:17.865735   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:19.866465   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:16.646761   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:16.660458   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:16.660522   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:16.695512   65164 cri.go:89] found id: ""
	I0920 22:22:16.695537   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.695545   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:16.695552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:16.695612   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:16.740479   65164 cri.go:89] found id: ""
	I0920 22:22:16.740508   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.740518   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:16.740529   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:16.740589   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:16.777259   65164 cri.go:89] found id: ""
	I0920 22:22:16.777291   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.777302   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:16.777309   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:16.777355   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:16.833157   65164 cri.go:89] found id: ""
	I0920 22:22:16.833187   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.833204   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:16.833212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:16.833272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:16.868589   65164 cri.go:89] found id: ""
	I0920 22:22:16.868618   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.868627   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:16.868635   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:16.868694   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:16.907229   65164 cri.go:89] found id: ""
	I0920 22:22:16.907257   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.907269   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:16.907277   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:16.907336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:16.945317   65164 cri.go:89] found id: ""
	I0920 22:22:16.945343   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.945351   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:16.945357   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:16.945413   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:16.982159   65164 cri.go:89] found id: ""
	I0920 22:22:16.982183   65164 logs.go:276] 0 containers: []
	W0920 22:22:16.982197   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:16.982206   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:16.982218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:17.034369   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:17.034416   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:17.049644   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:17.049677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:17.127816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:17.127840   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:17.127852   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:17.217213   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:17.217248   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:19.766957   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:19.781545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:19.781627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:19.819616   65164 cri.go:89] found id: ""
	I0920 22:22:19.819642   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.819650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:19.819656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:19.819717   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:19.860983   65164 cri.go:89] found id: ""
	I0920 22:22:19.861009   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.861020   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:19.861027   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:19.861083   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:19.904051   65164 cri.go:89] found id: ""
	I0920 22:22:19.904075   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.904083   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:19.904088   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:19.904153   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:19.940246   65164 cri.go:89] found id: ""
	I0920 22:22:19.940273   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.940280   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:19.940286   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:19.940332   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:19.980380   65164 cri.go:89] found id: ""
	I0920 22:22:19.980404   65164 logs.go:276] 0 containers: []
	W0920 22:22:19.980412   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:19.980418   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:19.980496   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:20.019603   65164 cri.go:89] found id: ""
	I0920 22:22:20.019625   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.019633   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:20.019639   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:20.019689   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:20.060667   65164 cri.go:89] found id: ""
	I0920 22:22:20.060696   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.060704   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:20.060709   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:20.060766   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:20.104432   65164 cri.go:89] found id: ""
	I0920 22:22:20.104464   65164 logs.go:276] 0 containers: []
	W0920 22:22:20.104475   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:20.104486   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:20.104500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:20.148433   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:20.148466   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:20.202370   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:20.202407   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:20.216482   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:20.216515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:20.297053   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:20.297071   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:20.297086   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:18.936098   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:21.430878   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:20.277726   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.777347   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.366027   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.366241   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:22.879569   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:22.893624   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:22.893683   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:22.937850   65164 cri.go:89] found id: ""
	I0920 22:22:22.937878   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.937886   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:22.937896   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:22.937944   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:22.974939   65164 cri.go:89] found id: ""
	I0920 22:22:22.974971   65164 logs.go:276] 0 containers: []
	W0920 22:22:22.974983   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:22.974990   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:22.975057   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:23.013689   65164 cri.go:89] found id: ""
	I0920 22:22:23.013723   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.013735   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:23.013742   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:23.013802   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:23.052092   65164 cri.go:89] found id: ""
	I0920 22:22:23.052123   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.052134   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:23.052141   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:23.052201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:23.092204   65164 cri.go:89] found id: ""
	I0920 22:22:23.092231   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.092248   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:23.092255   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:23.092317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:23.146265   65164 cri.go:89] found id: ""
	I0920 22:22:23.146294   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.146304   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:23.146312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:23.146373   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:23.183802   65164 cri.go:89] found id: ""
	I0920 22:22:23.183832   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.183843   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:23.183865   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:23.183932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:23.219604   65164 cri.go:89] found id: ""
	I0920 22:22:23.219633   65164 logs.go:276] 0 containers: []
	W0920 22:22:23.219643   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:23.219654   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:23.219671   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:23.268138   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:23.268171   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:23.285309   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:23.285343   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:23.362940   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:23.362985   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:23.363006   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:23.450369   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:23.450411   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:23.431589   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:25.930565   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:24.778456   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:27.278474   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.866577   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.364505   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:26.001038   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:26.016893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:26.016956   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:26.053611   65164 cri.go:89] found id: ""
	I0920 22:22:26.053641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.053653   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:26.053661   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:26.053710   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:26.093966   65164 cri.go:89] found id: ""
	I0920 22:22:26.093994   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.094005   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:26.094012   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:26.094077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:26.134321   65164 cri.go:89] found id: ""
	I0920 22:22:26.134360   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.134372   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:26.134379   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:26.134442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:26.171597   65164 cri.go:89] found id: ""
	I0920 22:22:26.171641   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.171651   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:26.171659   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:26.171730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:26.207751   65164 cri.go:89] found id: ""
	I0920 22:22:26.207778   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.207787   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:26.207793   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:26.207845   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:26.249174   65164 cri.go:89] found id: ""
	I0920 22:22:26.249201   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.249217   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:26.249225   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:26.249286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:26.284627   65164 cri.go:89] found id: ""
	I0920 22:22:26.284656   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.284666   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:26.284673   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:26.284729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:26.323942   65164 cri.go:89] found id: ""
	I0920 22:22:26.323968   65164 logs.go:276] 0 containers: []
	W0920 22:22:26.323980   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:26.323991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:26.324005   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:26.379698   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:26.379737   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:26.394553   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:26.394584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:26.472301   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:26.472328   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:26.472354   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:26.549464   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:26.549509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.091838   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:29.115120   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:29.115198   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:29.161609   65164 cri.go:89] found id: ""
	I0920 22:22:29.161632   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.161640   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:29.161646   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:29.161709   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:29.197886   65164 cri.go:89] found id: ""
	I0920 22:22:29.197912   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.197920   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:29.197926   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:29.197978   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:29.236953   65164 cri.go:89] found id: ""
	I0920 22:22:29.236980   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.236991   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:29.236999   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:29.237059   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:29.273573   65164 cri.go:89] found id: ""
	I0920 22:22:29.273603   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.273614   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:29.273622   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:29.273679   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:29.308752   65164 cri.go:89] found id: ""
	I0920 22:22:29.308776   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.308784   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:29.308789   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:29.308847   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:29.348578   65164 cri.go:89] found id: ""
	I0920 22:22:29.348608   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.348620   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:29.348627   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:29.348685   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:29.385184   65164 cri.go:89] found id: ""
	I0920 22:22:29.385225   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.385237   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:29.385246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:29.385308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:29.421935   65164 cri.go:89] found id: ""
	I0920 22:22:29.421968   65164 logs.go:276] 0 containers: []
	W0920 22:22:29.421979   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:29.421990   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:29.422012   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:29.438574   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:29.438606   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:29.515334   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:29.515362   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:29.515376   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:29.602411   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:29.602451   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:29.646284   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:29.646320   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:27.931022   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:30.430935   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.431673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:29.278850   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.778403   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:31.364592   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:33.864487   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:32.196699   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:32.211260   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:32.211338   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:32.248007   65164 cri.go:89] found id: ""
	I0920 22:22:32.248040   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.248052   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:32.248060   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:32.248122   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:32.285451   65164 cri.go:89] found id: ""
	I0920 22:22:32.285481   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.285490   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:32.285496   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:32.285548   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:32.321721   65164 cri.go:89] found id: ""
	I0920 22:22:32.321746   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.321756   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:32.321768   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:32.321827   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:32.363669   65164 cri.go:89] found id: ""
	I0920 22:22:32.363698   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.363708   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:32.363717   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:32.363770   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:32.402681   65164 cri.go:89] found id: ""
	I0920 22:22:32.402721   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.402729   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:32.402735   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:32.402786   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:32.445648   65164 cri.go:89] found id: ""
	I0920 22:22:32.445674   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.445684   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:32.445691   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:32.445750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:32.482291   65164 cri.go:89] found id: ""
	I0920 22:22:32.482315   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.482330   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:32.482337   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:32.482397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:32.523095   65164 cri.go:89] found id: ""
	I0920 22:22:32.523124   65164 logs.go:276] 0 containers: []
	W0920 22:22:32.523134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:32.523145   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:32.523160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:32.537459   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:32.537489   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:32.614828   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:32.614853   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:32.614867   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:32.695690   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:32.695728   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:32.739620   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:32.739658   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:35.296323   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:35.312341   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:35.312417   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:35.349934   65164 cri.go:89] found id: ""
	I0920 22:22:35.349962   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.349972   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:35.349980   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:35.350052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:35.385427   65164 cri.go:89] found id: ""
	I0920 22:22:35.385457   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.385468   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:35.385475   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:35.385538   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:35.422118   65164 cri.go:89] found id: ""
	I0920 22:22:35.422146   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.422157   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:35.422165   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:35.422230   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:35.458552   65164 cri.go:89] found id: ""
	I0920 22:22:35.458573   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.458580   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:35.458586   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:35.458639   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:35.493575   65164 cri.go:89] found id: ""
	I0920 22:22:35.493601   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.493610   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:35.493615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:35.493671   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:35.530316   65164 cri.go:89] found id: ""
	I0920 22:22:35.530342   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.530350   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:35.530356   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:35.530411   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:35.568247   65164 cri.go:89] found id: ""
	I0920 22:22:35.568279   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.568289   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:35.568303   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:35.568367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:35.605207   65164 cri.go:89] found id: ""
	I0920 22:22:35.605234   65164 logs.go:276] 0 containers: []
	W0920 22:22:35.605242   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:35.605255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:35.605267   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:35.619771   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:35.619797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:35.699917   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:35.699939   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:35.699950   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:35.785310   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:35.785346   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:35.849112   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:35.849147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:34.930225   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.931840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:34.277310   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:36.278180   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:35.866985   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.364684   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.405695   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:38.419499   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:38.419580   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:38.461618   65164 cri.go:89] found id: ""
	I0920 22:22:38.461642   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.461650   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:38.461656   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:38.461713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:38.504837   65164 cri.go:89] found id: ""
	I0920 22:22:38.504859   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.504867   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:38.504872   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:38.504923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:38.539399   65164 cri.go:89] found id: ""
	I0920 22:22:38.539434   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.539446   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:38.539453   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:38.539518   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:38.576129   65164 cri.go:89] found id: ""
	I0920 22:22:38.576155   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.576165   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:38.576173   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:38.576233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:38.611383   65164 cri.go:89] found id: ""
	I0920 22:22:38.611411   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.611423   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:38.611430   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:38.611491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:38.650553   65164 cri.go:89] found id: ""
	I0920 22:22:38.650583   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.650594   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:38.650602   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:38.650668   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:38.686734   65164 cri.go:89] found id: ""
	I0920 22:22:38.686756   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.686764   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:38.686769   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:38.686829   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:38.726268   65164 cri.go:89] found id: ""
	I0920 22:22:38.726301   65164 logs.go:276] 0 containers: []
	W0920 22:22:38.726312   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:38.726323   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:38.726336   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:38.801995   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:38.802042   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:38.843775   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:38.843811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:38.897874   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:38.897909   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:38.912111   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:38.912136   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:38.989021   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:38.932311   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.432699   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:38.778194   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.277875   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:43.278108   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:40.364970   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:42.366255   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:44.865678   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:41.489341   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:41.515667   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:41.515724   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:41.567019   65164 cri.go:89] found id: ""
	I0920 22:22:41.567047   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.567057   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:41.567066   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:41.567128   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:41.625316   65164 cri.go:89] found id: ""
	I0920 22:22:41.625347   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.625357   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:41.625371   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:41.625436   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:41.663420   65164 cri.go:89] found id: ""
	I0920 22:22:41.663448   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.663459   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:41.663467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:41.663529   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:41.701639   65164 cri.go:89] found id: ""
	I0920 22:22:41.701670   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.701682   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:41.701689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:41.701751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:41.739740   65164 cri.go:89] found id: ""
	I0920 22:22:41.739768   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.739778   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:41.739786   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:41.739843   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:41.781365   65164 cri.go:89] found id: ""
	I0920 22:22:41.781390   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.781401   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:41.781408   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:41.781467   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:41.818708   65164 cri.go:89] found id: ""
	I0920 22:22:41.818732   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.818740   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:41.818746   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:41.818793   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:41.854193   65164 cri.go:89] found id: ""
	I0920 22:22:41.854226   65164 logs.go:276] 0 containers: []
	W0920 22:22:41.854237   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:41.854249   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:41.854266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:41.923678   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:41.923701   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:41.923716   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:42.006009   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:42.006045   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:42.044745   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:42.044777   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:42.099462   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:42.099500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:44.614787   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:44.629100   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:44.629178   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:44.672347   65164 cri.go:89] found id: ""
	I0920 22:22:44.672381   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.672392   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:44.672401   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:44.672525   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:44.717617   65164 cri.go:89] found id: ""
	I0920 22:22:44.717646   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.717657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:44.717664   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:44.717730   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:44.753035   65164 cri.go:89] found id: ""
	I0920 22:22:44.753062   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.753073   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:44.753092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:44.753165   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:44.790745   65164 cri.go:89] found id: ""
	I0920 22:22:44.790774   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.790785   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:44.790792   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:44.790859   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:44.828604   65164 cri.go:89] found id: ""
	I0920 22:22:44.828630   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.828640   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:44.828647   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:44.828706   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:44.865982   65164 cri.go:89] found id: ""
	I0920 22:22:44.866008   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.866019   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:44.866026   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:44.866077   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:44.903473   65164 cri.go:89] found id: ""
	I0920 22:22:44.903499   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.903508   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:44.903514   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:44.903571   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:44.943481   65164 cri.go:89] found id: ""
	I0920 22:22:44.943509   65164 logs.go:276] 0 containers: []
	W0920 22:22:44.943521   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:44.943530   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:44.943543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:44.996783   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:44.996823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:45.011531   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:45.011560   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:45.087225   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:45.087252   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:45.087266   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:45.166460   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:45.166500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:43.931267   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:46.430497   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:45.778462   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.783057   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.365053   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:49.864478   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:47.709426   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:47.723507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:47.723590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:47.758203   65164 cri.go:89] found id: ""
	I0920 22:22:47.758228   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.758238   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:47.758246   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:47.758304   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:47.796098   65164 cri.go:89] found id: ""
	I0920 22:22:47.796129   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.796140   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:47.796148   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:47.796210   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:47.831860   65164 cri.go:89] found id: ""
	I0920 22:22:47.831889   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.831899   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:47.831906   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:47.831966   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:47.866241   65164 cri.go:89] found id: ""
	I0920 22:22:47.866266   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.866274   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:47.866281   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:47.866337   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:47.905029   65164 cri.go:89] found id: ""
	I0920 22:22:47.905055   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.905063   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:47.905073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:47.905124   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:47.942290   65164 cri.go:89] found id: ""
	I0920 22:22:47.942322   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.942332   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:47.942340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:47.942400   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:47.977901   65164 cri.go:89] found id: ""
	I0920 22:22:47.977930   65164 logs.go:276] 0 containers: []
	W0920 22:22:47.977939   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:47.977946   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:47.978012   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:48.014682   65164 cri.go:89] found id: ""
	I0920 22:22:48.014715   65164 logs.go:276] 0 containers: []
	W0920 22:22:48.014726   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:48.014736   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:48.014749   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:48.067116   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:48.067160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:48.082926   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:48.082980   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:48.158209   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:48.158231   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:48.158243   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:48.239760   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:48.239866   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:50.788520   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:50.802410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:50.802485   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:50.844263   65164 cri.go:89] found id: ""
	I0920 22:22:50.844291   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.844303   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:50.844311   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:50.844367   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:48.931174   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.431910   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.277510   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:52.279387   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:51.866270   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.366161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:50.880271   65164 cri.go:89] found id: ""
	I0920 22:22:50.880301   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.880311   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:50.880324   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:50.880379   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:50.915627   65164 cri.go:89] found id: ""
	I0920 22:22:50.915653   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.915663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:50.915671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:50.915725   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:50.951779   65164 cri.go:89] found id: ""
	I0920 22:22:50.951804   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.951823   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:50.951831   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:50.951879   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:50.989174   65164 cri.go:89] found id: ""
	I0920 22:22:50.989201   65164 logs.go:276] 0 containers: []
	W0920 22:22:50.989212   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:50.989219   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:50.989293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:51.024049   65164 cri.go:89] found id: ""
	I0920 22:22:51.024084   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.024094   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:51.024106   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:51.024167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:51.060837   65164 cri.go:89] found id: ""
	I0920 22:22:51.060865   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.060877   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:51.060884   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:51.060934   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:51.099248   65164 cri.go:89] found id: ""
	I0920 22:22:51.099272   65164 logs.go:276] 0 containers: []
	W0920 22:22:51.099285   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:51.099295   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:51.099308   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:51.148388   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:51.148420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:51.162735   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:51.162763   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:51.239020   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:51.239048   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:51.239063   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:51.318893   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:51.318934   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:53.860664   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:53.874434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:53.874491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:53.909449   65164 cri.go:89] found id: ""
	I0920 22:22:53.909475   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.909485   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:53.909493   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:53.909550   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:53.944367   65164 cri.go:89] found id: ""
	I0920 22:22:53.944395   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.944406   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:53.944414   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:53.944469   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:53.980994   65164 cri.go:89] found id: ""
	I0920 22:22:53.981032   65164 logs.go:276] 0 containers: []
	W0920 22:22:53.981043   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:53.981051   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:53.981110   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:54.018456   65164 cri.go:89] found id: ""
	I0920 22:22:54.018494   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.018506   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:54.018515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:54.018574   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:54.053763   65164 cri.go:89] found id: ""
	I0920 22:22:54.053793   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.053805   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:54.053813   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:54.053871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:54.091847   65164 cri.go:89] found id: ""
	I0920 22:22:54.091871   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.091883   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:54.091890   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:54.091951   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:54.126844   65164 cri.go:89] found id: ""
	I0920 22:22:54.126875   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.126885   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:54.126893   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:54.126949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:54.168967   65164 cri.go:89] found id: ""
	I0920 22:22:54.168999   65164 logs.go:276] 0 containers: []
	W0920 22:22:54.169010   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:54.169022   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:54.169043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:54.237771   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:54.237791   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:54.237806   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:54.319629   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:54.319668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:22:54.364470   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:54.364504   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:54.418798   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:54.418835   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:53.931126   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.430892   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:54.777269   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.777539   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.865494   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:59.365274   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:56.935104   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:22:56.948472   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:22:56.948545   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:22:56.983338   65164 cri.go:89] found id: ""
	I0920 22:22:56.983365   65164 logs.go:276] 0 containers: []
	W0920 22:22:56.983373   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:22:56.983379   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:22:56.983425   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:22:57.021781   65164 cri.go:89] found id: ""
	I0920 22:22:57.021809   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.021819   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:22:57.021825   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:22:57.021871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:22:57.057191   65164 cri.go:89] found id: ""
	I0920 22:22:57.057231   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.057239   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:22:57.057246   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:22:57.057294   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:22:57.092567   65164 cri.go:89] found id: ""
	I0920 22:22:57.092595   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.092607   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:22:57.092615   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:22:57.092669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:22:57.128098   65164 cri.go:89] found id: ""
	I0920 22:22:57.128125   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.128136   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:22:57.128143   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:22:57.128205   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:22:57.163001   65164 cri.go:89] found id: ""
	I0920 22:22:57.163029   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.163040   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:22:57.163048   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:22:57.163105   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:22:57.199392   65164 cri.go:89] found id: ""
	I0920 22:22:57.199424   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.199434   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:22:57.199442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:22:57.199491   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:22:57.235132   65164 cri.go:89] found id: ""
	I0920 22:22:57.235155   65164 logs.go:276] 0 containers: []
	W0920 22:22:57.235162   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:22:57.235170   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:22:57.235181   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:57.287112   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:22:57.287148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:22:57.301293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:22:57.301318   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:22:57.379895   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:22:57.379920   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:22:57.379935   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:22:57.466022   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:22:57.466059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.006738   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:00.019871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:00.019949   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:00.054127   65164 cri.go:89] found id: ""
	I0920 22:23:00.054155   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.054162   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:00.054168   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:00.054228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:00.089755   65164 cri.go:89] found id: ""
	I0920 22:23:00.089788   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.089800   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:00.089807   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:00.089867   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:00.125466   65164 cri.go:89] found id: ""
	I0920 22:23:00.125489   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.125497   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:00.125503   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:00.125559   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:00.166147   65164 cri.go:89] found id: ""
	I0920 22:23:00.166171   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.166178   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:00.166187   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:00.166248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:00.202752   65164 cri.go:89] found id: ""
	I0920 22:23:00.202781   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.202793   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:00.202801   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:00.202862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:00.240727   65164 cri.go:89] found id: ""
	I0920 22:23:00.240757   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.240768   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:00.240776   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:00.240833   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:00.277247   65164 cri.go:89] found id: ""
	I0920 22:23:00.277270   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.277278   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:00.277285   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:00.277341   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:00.313098   65164 cri.go:89] found id: ""
	I0920 22:23:00.313123   65164 logs.go:276] 0 containers: []
	W0920 22:23:00.313134   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:00.313144   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:00.313160   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:00.328326   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:00.328353   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:00.397929   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:00.397961   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:00.397975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:00.479830   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:00.479862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:00.519715   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:00.519745   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:22:58.929948   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.930012   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:22:58.778239   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:00.778442   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.279610   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:01.366521   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.366771   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:03.073360   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:03.088169   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:03.088236   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:03.129646   65164 cri.go:89] found id: ""
	I0920 22:23:03.129673   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.129682   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:03.129690   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:03.129751   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:03.163771   65164 cri.go:89] found id: ""
	I0920 22:23:03.163801   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.163812   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:03.163820   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:03.163876   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:03.201253   65164 cri.go:89] found id: ""
	I0920 22:23:03.201291   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.201299   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:03.201317   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:03.201374   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:03.239623   65164 cri.go:89] found id: ""
	I0920 22:23:03.239652   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.239663   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:03.239670   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:03.239731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:03.275530   65164 cri.go:89] found id: ""
	I0920 22:23:03.275555   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.275564   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:03.275572   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:03.275631   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:03.312812   65164 cri.go:89] found id: ""
	I0920 22:23:03.312845   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.312855   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:03.312863   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:03.312923   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:03.352975   65164 cri.go:89] found id: ""
	I0920 22:23:03.352998   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.353009   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:03.353016   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:03.353070   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:03.390815   65164 cri.go:89] found id: ""
	I0920 22:23:03.390846   65164 logs.go:276] 0 containers: []
	W0920 22:23:03.390856   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:03.390867   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:03.390880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:03.478231   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:03.478269   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:03.524991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:03.525024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:03.580114   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:03.580159   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:03.596120   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:03.596147   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:03.671302   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:02.931281   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.429671   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.430177   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.781406   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:08.278876   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:05.367069   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:07.866058   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:09.866187   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:06.171484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:06.186410   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:06.186480   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:06.222102   65164 cri.go:89] found id: ""
	I0920 22:23:06.222131   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.222139   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:06.222148   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:06.222206   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:06.257922   65164 cri.go:89] found id: ""
	I0920 22:23:06.257945   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.257953   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:06.257958   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:06.258006   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:06.299149   65164 cri.go:89] found id: ""
	I0920 22:23:06.299176   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.299184   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:06.299190   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:06.299244   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:06.336667   65164 cri.go:89] found id: ""
	I0920 22:23:06.336693   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.336703   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:06.336710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:06.336772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:06.379397   65164 cri.go:89] found id: ""
	I0920 22:23:06.379425   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.379435   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:06.379442   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:06.379505   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:06.415945   65164 cri.go:89] found id: ""
	I0920 22:23:06.415976   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.415987   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:06.415993   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:06.416051   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:06.452902   65164 cri.go:89] found id: ""
	I0920 22:23:06.452932   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.452955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:06.452962   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:06.453033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:06.490284   65164 cri.go:89] found id: ""
	I0920 22:23:06.490314   65164 logs.go:276] 0 containers: []
	W0920 22:23:06.490324   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:06.490334   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:06.490351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:06.529838   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:06.529862   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:06.582421   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:06.582461   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:06.596504   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:06.596542   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:06.667816   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:06.667836   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:06.667851   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.247952   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:09.264316   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:09.264386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:09.300577   65164 cri.go:89] found id: ""
	I0920 22:23:09.300612   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.300621   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:09.300627   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:09.300674   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:09.345945   65164 cri.go:89] found id: ""
	I0920 22:23:09.345972   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.345982   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:09.345991   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:09.346049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:09.385660   65164 cri.go:89] found id: ""
	I0920 22:23:09.385690   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.385700   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:09.385708   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:09.385775   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:09.422286   65164 cri.go:89] found id: ""
	I0920 22:23:09.422321   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.422332   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:09.422338   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:09.422397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:09.458377   65164 cri.go:89] found id: ""
	I0920 22:23:09.458402   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.458410   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:09.458416   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:09.458477   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:09.499470   65164 cri.go:89] found id: ""
	I0920 22:23:09.499498   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.499509   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:09.499517   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:09.499603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:09.535725   65164 cri.go:89] found id: ""
	I0920 22:23:09.535751   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.535759   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:09.535764   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:09.535812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:09.575755   65164 cri.go:89] found id: ""
	I0920 22:23:09.575784   65164 logs.go:276] 0 containers: []
	W0920 22:23:09.575795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:09.575807   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:09.575825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:09.590265   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:09.590294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:09.661559   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:09.661585   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:09.661617   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:09.741099   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:09.741143   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:09.785560   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:09.785589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:09.431051   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:11.930405   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:10.781647   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:13.278000   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.366060   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:14.864332   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:12.344085   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:12.358919   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:12.359021   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:12.397182   65164 cri.go:89] found id: ""
	I0920 22:23:12.397220   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.397229   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:12.397235   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:12.397293   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:12.435133   65164 cri.go:89] found id: ""
	I0920 22:23:12.435162   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.435169   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:12.435177   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:12.435237   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:12.474391   65164 cri.go:89] found id: ""
	I0920 22:23:12.474416   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.474427   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:12.474434   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:12.474493   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:12.511992   65164 cri.go:89] found id: ""
	I0920 22:23:12.512015   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.512022   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:12.512028   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:12.512071   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:12.547248   65164 cri.go:89] found id: ""
	I0920 22:23:12.547269   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.547278   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:12.547289   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:12.547336   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:12.582721   65164 cri.go:89] found id: ""
	I0920 22:23:12.582747   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.582759   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:12.582766   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:12.582837   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:12.620184   65164 cri.go:89] found id: ""
	I0920 22:23:12.620207   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.620215   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:12.620220   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:12.620272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:12.657416   65164 cri.go:89] found id: ""
	I0920 22:23:12.657441   65164 logs.go:276] 0 containers: []
	W0920 22:23:12.657449   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:12.657458   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:12.657477   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:12.671189   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:12.671218   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:12.742115   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:12.742139   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:12.742150   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:12.817551   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:12.817586   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:12.861940   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:12.861970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:15.413615   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:15.427127   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:15.427201   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:15.462329   65164 cri.go:89] found id: ""
	I0920 22:23:15.462357   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.462366   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:15.462373   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:15.462430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:15.508791   65164 cri.go:89] found id: ""
	I0920 22:23:15.508816   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.508824   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:15.508834   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:15.508894   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:15.545381   65164 cri.go:89] found id: ""
	I0920 22:23:15.545403   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.545411   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:15.545417   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:15.545475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:15.585401   65164 cri.go:89] found id: ""
	I0920 22:23:15.585422   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.585430   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:15.585436   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:15.585494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:15.631507   65164 cri.go:89] found id: ""
	I0920 22:23:15.631529   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.631542   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:15.631549   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:15.631608   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:15.671219   65164 cri.go:89] found id: ""
	I0920 22:23:15.671247   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.671270   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:15.671278   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:15.671333   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:15.711348   65164 cri.go:89] found id: ""
	I0920 22:23:15.711382   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.711393   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:15.711400   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:15.711463   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:15.750582   65164 cri.go:89] found id: ""
	I0920 22:23:15.750614   65164 logs.go:276] 0 containers: []
	W0920 22:23:15.750624   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:15.750635   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:15.750649   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:15.766294   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:15.766321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:15.839079   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:15.839104   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:15.839117   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:14.430673   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.430824   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.278107   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:17.778678   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:16.866212   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:18.866838   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:15.920123   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:15.920156   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:15.964292   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:15.964321   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.517484   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:18.531117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:18.531192   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:18.567351   65164 cri.go:89] found id: ""
	I0920 22:23:18.567377   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.567389   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:18.567397   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:18.567447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:18.603319   65164 cri.go:89] found id: ""
	I0920 22:23:18.603344   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.603353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:18.603358   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:18.603408   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:18.639699   65164 cri.go:89] found id: ""
	I0920 22:23:18.639726   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.639737   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:18.639744   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:18.639806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:18.681880   65164 cri.go:89] found id: ""
	I0920 22:23:18.681910   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.681931   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:18.681939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:18.682002   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:18.716589   65164 cri.go:89] found id: ""
	I0920 22:23:18.716619   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.716629   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:18.716637   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:18.716696   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:18.755280   65164 cri.go:89] found id: ""
	I0920 22:23:18.755310   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.755321   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:18.755329   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:18.755390   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:18.791915   65164 cri.go:89] found id: ""
	I0920 22:23:18.791944   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.791955   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:18.791963   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:18.792033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:18.831544   65164 cri.go:89] found id: ""
	I0920 22:23:18.831568   65164 logs.go:276] 0 containers: []
	W0920 22:23:18.831576   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:18.831584   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:18.831593   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:18.882024   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:18.882059   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:18.898191   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:18.898226   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:18.970613   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:18.970643   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:18.970657   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:19.046112   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:19.046145   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:18.931618   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.431146   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:20.278885   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:22.777640   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:23.865349   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:21.586078   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:21.599467   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:21.599541   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:21.638708   65164 cri.go:89] found id: ""
	I0920 22:23:21.638733   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.638741   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:21.638747   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:21.638851   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:21.680546   65164 cri.go:89] found id: ""
	I0920 22:23:21.680574   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.680583   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:21.680589   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:21.680643   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:21.724122   65164 cri.go:89] found id: ""
	I0920 22:23:21.724149   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.724160   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:21.724167   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:21.724224   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:21.758447   65164 cri.go:89] found id: ""
	I0920 22:23:21.758474   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.758482   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:21.758488   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:21.758554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:21.799562   65164 cri.go:89] found id: ""
	I0920 22:23:21.799586   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.799594   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:21.799600   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:21.799658   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:21.836768   65164 cri.go:89] found id: ""
	I0920 22:23:21.836799   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.836808   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:21.836814   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:21.836871   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:21.876750   65164 cri.go:89] found id: ""
	I0920 22:23:21.876770   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.876778   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:21.876784   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:21.876873   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:21.912756   65164 cri.go:89] found id: ""
	I0920 22:23:21.912783   65164 logs.go:276] 0 containers: []
	W0920 22:23:21.912792   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:21.912800   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:21.912813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:21.990168   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:21.990191   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:21.990202   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:22.094757   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:22.094797   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:22.152247   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:22.152280   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:22.205505   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:22.205543   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:24.720784   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:24.734305   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:24.734369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:24.771142   65164 cri.go:89] found id: ""
	I0920 22:23:24.771171   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.771182   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:24.771190   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:24.771248   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:24.807182   65164 cri.go:89] found id: ""
	I0920 22:23:24.807208   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.807216   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:24.807221   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:24.807278   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:24.840805   65164 cri.go:89] found id: ""
	I0920 22:23:24.840836   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.840847   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:24.840854   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:24.840910   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:24.877583   65164 cri.go:89] found id: ""
	I0920 22:23:24.877609   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.877617   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:24.877623   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:24.877677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:24.913485   65164 cri.go:89] found id: ""
	I0920 22:23:24.913518   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.913529   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:24.913536   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:24.913594   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:24.949441   65164 cri.go:89] found id: ""
	I0920 22:23:24.949466   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.949475   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:24.949481   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:24.949527   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:24.985355   65164 cri.go:89] found id: ""
	I0920 22:23:24.985377   65164 logs.go:276] 0 containers: []
	W0920 22:23:24.985386   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:24.985391   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:24.985442   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:25.021343   65164 cri.go:89] found id: ""
	I0920 22:23:25.021367   65164 logs.go:276] 0 containers: []
	W0920 22:23:25.021375   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:25.021383   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:25.021393   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:25.063475   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:25.063509   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:25.113397   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:25.113436   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:25.127946   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:25.127981   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:25.202313   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:25.202334   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:25.202349   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:23.431756   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.929621   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:24.778308   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.277297   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:25.865413   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.866883   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:27.784353   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:27.800363   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:27.800439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:27.861988   65164 cri.go:89] found id: ""
	I0920 22:23:27.862016   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.862026   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:27.862034   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:27.862093   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:27.925087   65164 cri.go:89] found id: ""
	I0920 22:23:27.925114   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.925126   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:27.925133   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:27.925194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:27.959757   65164 cri.go:89] found id: ""
	I0920 22:23:27.959785   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.959796   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:27.959804   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:27.959857   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:27.998853   65164 cri.go:89] found id: ""
	I0920 22:23:27.998881   65164 logs.go:276] 0 containers: []
	W0920 22:23:27.998891   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:27.998898   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:27.998955   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:28.034316   65164 cri.go:89] found id: ""
	I0920 22:23:28.034338   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.034347   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:28.034352   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:28.034402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:28.068834   65164 cri.go:89] found id: ""
	I0920 22:23:28.068857   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.068865   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:28.068871   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:28.068921   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:28.103599   65164 cri.go:89] found id: ""
	I0920 22:23:28.103631   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.103642   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:28.103649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:28.103713   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:28.139211   65164 cri.go:89] found id: ""
	I0920 22:23:28.139235   65164 logs.go:276] 0 containers: []
	W0920 22:23:28.139243   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:28.139250   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:28.139261   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:28.191788   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:28.191825   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:28.208453   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:28.208487   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:28.280469   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:28.280488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:28.280500   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:28.357234   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:28.357277   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:27.931829   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.430915   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:29.278181   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:31.777506   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.365534   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:32.366025   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:34.864764   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:30.903301   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:30.916515   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:30.916595   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:30.950403   65164 cri.go:89] found id: ""
	I0920 22:23:30.950427   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.950435   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:30.950442   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:30.950488   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:30.987361   65164 cri.go:89] found id: ""
	I0920 22:23:30.987383   65164 logs.go:276] 0 containers: []
	W0920 22:23:30.987392   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:30.987398   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:30.987447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:31.025688   65164 cri.go:89] found id: ""
	I0920 22:23:31.025712   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.025720   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:31.025727   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:31.025789   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:31.064227   65164 cri.go:89] found id: ""
	I0920 22:23:31.064253   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.064261   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:31.064268   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:31.064313   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:31.100029   65164 cri.go:89] found id: ""
	I0920 22:23:31.100059   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.100067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:31.100073   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:31.100118   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:31.134290   65164 cri.go:89] found id: ""
	I0920 22:23:31.134328   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.134339   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:31.134346   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:31.134404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:31.174453   65164 cri.go:89] found id: ""
	I0920 22:23:31.174483   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.174493   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:31.174500   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:31.174558   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:31.209768   65164 cri.go:89] found id: ""
	I0920 22:23:31.209792   65164 logs.go:276] 0 containers: []
	W0920 22:23:31.209801   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:31.209812   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:31.209823   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:31.288640   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:31.288674   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:31.329681   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:31.329707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:31.388046   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:31.388090   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:31.402872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:31.402900   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:31.476528   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:33.977657   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:33.991331   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:33.991407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:34.027932   65164 cri.go:89] found id: ""
	I0920 22:23:34.027960   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.027970   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:34.027979   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:34.028036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:34.070369   65164 cri.go:89] found id: ""
	I0920 22:23:34.070395   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.070403   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:34.070411   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:34.070471   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:34.107467   65164 cri.go:89] found id: ""
	I0920 22:23:34.107493   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.107501   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:34.107507   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:34.107556   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:34.148177   65164 cri.go:89] found id: ""
	I0920 22:23:34.148203   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.148211   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:34.148217   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:34.148274   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:34.185721   65164 cri.go:89] found id: ""
	I0920 22:23:34.185751   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.185762   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:34.185775   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:34.185834   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:34.221089   65164 cri.go:89] found id: ""
	I0920 22:23:34.221110   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.221118   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:34.221124   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:34.221167   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:34.256222   65164 cri.go:89] found id: ""
	I0920 22:23:34.256244   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.256252   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:34.256257   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:34.256305   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:34.291640   65164 cri.go:89] found id: ""
	I0920 22:23:34.291665   65164 logs.go:276] 0 containers: []
	W0920 22:23:34.291676   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:34.291688   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:34.291702   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:34.342602   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:34.342642   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:34.356770   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:34.356798   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:34.425461   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:34.425488   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:34.425503   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:34.505552   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:34.505591   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:32.932537   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.433994   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:33.778173   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:35.778427   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:37.778562   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:36.866421   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:39.364982   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
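	(Note: the interleaved `pod_ready.go:103` lines from the other test processes poll each metrics-server pod's Ready condition until it turns True. A minimal client-go sketch of that kind of check, using the kubeconfig path and a pod name taken from the log above — the function name and error handling are illustrative, not minikube's pod_ready.go:)

```go
// Illustrative sketch only: report whether a pod's Ready condition is True,
// the condition the `has status "Ready":"False"` lines reflect.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(kubeconfig, namespace, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "metrics-server-6867b74b74-4j9zb")
	fmt.Println(ready, err)
}
```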
	I0920 22:23:37.045098   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:37.058954   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:37.059047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:37.094828   65164 cri.go:89] found id: ""
	I0920 22:23:37.094857   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.094867   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:37.094875   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:37.094935   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:37.133943   65164 cri.go:89] found id: ""
	I0920 22:23:37.133975   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.133985   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:37.133996   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:37.134056   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:37.171931   65164 cri.go:89] found id: ""
	I0920 22:23:37.171976   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.171988   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:37.171996   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:37.172064   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:37.209342   65164 cri.go:89] found id: ""
	I0920 22:23:37.209367   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.209375   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:37.209381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:37.209431   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:37.250832   65164 cri.go:89] found id: ""
	I0920 22:23:37.250866   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.250877   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:37.250885   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:37.250984   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:37.292507   65164 cri.go:89] found id: ""
	I0920 22:23:37.292533   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.292541   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:37.292547   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:37.292603   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:37.326576   65164 cri.go:89] found id: ""
	I0920 22:23:37.326604   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.326612   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:37.326618   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:37.326675   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:37.362983   65164 cri.go:89] found id: ""
	I0920 22:23:37.363010   65164 logs.go:276] 0 containers: []
	W0920 22:23:37.363020   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:37.363030   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:37.363043   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:37.418526   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:37.418564   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:37.434155   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:37.434180   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:37.521264   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:37.521292   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:37.521303   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:37.603628   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:37.603663   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:40.146295   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:40.160861   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:40.160932   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:40.194739   65164 cri.go:89] found id: ""
	I0920 22:23:40.194762   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.194769   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:40.194776   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:40.194836   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:40.230753   65164 cri.go:89] found id: ""
	I0920 22:23:40.230781   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.230792   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:40.230800   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:40.230860   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:40.268363   65164 cri.go:89] found id: ""
	I0920 22:23:40.268392   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.268403   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:40.268411   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:40.268468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:40.307882   65164 cri.go:89] found id: ""
	I0920 22:23:40.307916   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.307925   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:40.307932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:40.307993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:40.344334   65164 cri.go:89] found id: ""
	I0920 22:23:40.344367   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.344378   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:40.344385   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:40.344445   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:40.385113   65164 cri.go:89] found id: ""
	I0920 22:23:40.385146   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.385157   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:40.385168   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:40.385269   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:40.422532   65164 cri.go:89] found id: ""
	I0920 22:23:40.422557   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.422568   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:40.422576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:40.422635   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:40.463425   65164 cri.go:89] found id: ""
	I0920 22:23:40.463451   65164 logs.go:276] 0 containers: []
	W0920 22:23:40.463462   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:40.463472   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:40.463486   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:40.518314   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:40.518350   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:40.534689   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:40.534718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:40.610484   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:40.610509   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:40.610525   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:40.697040   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:40.697076   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:37.930996   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.429855   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.431458   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:40.279291   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:42.778158   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:41.366031   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.367298   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:43.240502   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:43.254888   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:43.254989   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:43.294646   65164 cri.go:89] found id: ""
	I0920 22:23:43.294672   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.294686   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:43.294699   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:43.294763   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:43.330802   65164 cri.go:89] found id: ""
	I0920 22:23:43.330828   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.330839   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:43.330846   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:43.330911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:43.369668   65164 cri.go:89] found id: ""
	I0920 22:23:43.369692   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.369702   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:43.369710   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:43.369780   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:43.409992   65164 cri.go:89] found id: ""
	I0920 22:23:43.410032   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.410045   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:43.410054   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:43.410116   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:43.451838   65164 cri.go:89] found id: ""
	I0920 22:23:43.451870   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.451882   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:43.451889   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:43.451945   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:43.489973   65164 cri.go:89] found id: ""
	I0920 22:23:43.490010   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.490021   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:43.490027   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:43.490087   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:43.529027   65164 cri.go:89] found id: ""
	I0920 22:23:43.529051   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.529060   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:43.529066   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:43.529121   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:43.567324   65164 cri.go:89] found id: ""
	I0920 22:23:43.567354   65164 logs.go:276] 0 containers: []
	W0920 22:23:43.567366   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:43.567377   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:43.567390   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:43.618859   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:43.618899   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:43.632929   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:43.632970   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:43.713084   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:43.713108   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:43.713123   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:43.795977   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:43.796018   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:44.931817   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.940596   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:44.778817   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:47.277419   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:45.866458   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:48.365654   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:46.337737   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:46.352031   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:46.352117   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:46.391292   65164 cri.go:89] found id: ""
	I0920 22:23:46.391316   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.391328   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:46.391335   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:46.391397   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:46.432554   65164 cri.go:89] found id: ""
	I0920 22:23:46.432576   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.432584   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:46.432590   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:46.432669   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:46.469609   65164 cri.go:89] found id: ""
	I0920 22:23:46.469638   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.469649   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:46.469657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:46.469718   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:46.509671   65164 cri.go:89] found id: ""
	I0920 22:23:46.509702   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.509714   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:46.509721   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:46.509791   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:46.549707   65164 cri.go:89] found id: ""
	I0920 22:23:46.549737   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.549747   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:46.549755   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:46.549825   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:46.591886   65164 cri.go:89] found id: ""
	I0920 22:23:46.591914   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.591924   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:46.591931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:46.591982   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:46.629358   65164 cri.go:89] found id: ""
	I0920 22:23:46.629384   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.629391   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:46.629397   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:46.629447   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:46.664854   65164 cri.go:89] found id: ""
	I0920 22:23:46.664880   65164 logs.go:276] 0 containers: []
	W0920 22:23:46.664888   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:46.664897   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:46.664908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:46.706941   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:46.706994   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:46.760617   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:46.760651   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:46.774526   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:46.774552   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:46.841947   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:46.841977   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:46.841995   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.423720   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:49.439375   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:49.439466   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:49.476032   65164 cri.go:89] found id: ""
	I0920 22:23:49.476056   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.476064   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:49.476070   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:49.476129   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:49.513099   65164 cri.go:89] found id: ""
	I0920 22:23:49.513128   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.513138   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:49.513144   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:49.513194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:49.551645   65164 cri.go:89] found id: ""
	I0920 22:23:49.551671   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.551682   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:49.551689   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:49.551750   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:49.592269   65164 cri.go:89] found id: ""
	I0920 22:23:49.592295   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.592305   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:49.592312   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:49.592371   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:49.629934   65164 cri.go:89] found id: ""
	I0920 22:23:49.629963   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.629971   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:49.629978   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:49.630033   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:49.668013   65164 cri.go:89] found id: ""
	I0920 22:23:49.668041   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.668051   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:49.668058   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:49.668119   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:49.706531   65164 cri.go:89] found id: ""
	I0920 22:23:49.706561   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.706572   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:49.706580   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:49.706644   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:49.744130   65164 cri.go:89] found id: ""
	I0920 22:23:49.744168   65164 logs.go:276] 0 containers: []
	W0920 22:23:49.744180   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:49.744192   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:49.744212   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:49.814987   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:49.815015   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:49.815031   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:49.897468   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:49.897505   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:49.936778   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:49.936804   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:49.990840   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:49.990884   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:49.430203   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:51.930280   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:49.777628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.278385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:50.864440   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.865694   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:52.505554   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:52.520975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:52.521043   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:52.565302   65164 cri.go:89] found id: ""
	I0920 22:23:52.565331   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.565340   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:52.565346   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:52.565404   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:52.606315   65164 cri.go:89] found id: ""
	I0920 22:23:52.606342   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.606352   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:52.606359   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:52.606416   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:52.641533   65164 cri.go:89] found id: ""
	I0920 22:23:52.641560   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.641574   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:52.641581   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:52.641640   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:52.677372   65164 cri.go:89] found id: ""
	I0920 22:23:52.677400   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.677412   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:52.677419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:52.677479   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:52.714016   65164 cri.go:89] found id: ""
	I0920 22:23:52.714056   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.714067   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:52.714075   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:52.714131   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:52.748194   65164 cri.go:89] found id: ""
	I0920 22:23:52.748224   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.748235   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:52.748243   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:52.748306   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:52.785329   65164 cri.go:89] found id: ""
	I0920 22:23:52.785356   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.785366   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:52.785373   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:52.785430   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:52.818350   65164 cri.go:89] found id: ""
	I0920 22:23:52.818382   65164 logs.go:276] 0 containers: []
	W0920 22:23:52.818401   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:52.818414   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:52.818429   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:52.862819   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:52.862854   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:52.914773   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:52.914811   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:52.928560   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:52.928589   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:53.005530   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:53.005553   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:53.005568   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:55.589457   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:55.604545   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:55.604634   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:55.643361   65164 cri.go:89] found id: ""
	I0920 22:23:55.643385   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.643394   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:55.643404   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:55.643458   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:55.678738   65164 cri.go:89] found id: ""
	I0920 22:23:55.678762   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.678770   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:55.678775   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:55.678820   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:55.714107   65164 cri.go:89] found id: ""
	I0920 22:23:55.714137   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.714153   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:55.714160   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:55.714221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:55.749093   65164 cri.go:89] found id: ""
	I0920 22:23:55.749118   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.749127   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:55.749135   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:55.749194   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:55.786138   65164 cri.go:89] found id: ""
	I0920 22:23:55.786166   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.786177   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:55.786185   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:55.786243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:55.820406   65164 cri.go:89] found id: ""
	I0920 22:23:55.820431   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.820440   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:55.820446   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:55.820509   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:55.854792   65164 cri.go:89] found id: ""
	I0920 22:23:55.854820   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.854829   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:55.854834   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:55.854889   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:53.931782   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.430505   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:54.278800   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:56.779261   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.368778   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:57.865901   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:55.891926   65164 cri.go:89] found id: ""
	I0920 22:23:55.891957   65164 logs.go:276] 0 containers: []
	W0920 22:23:55.891967   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:55.891977   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:55.891991   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:55.947642   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:55.947677   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:55.961872   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:55.961903   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:56.038004   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:56.038035   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:56.038049   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:56.129637   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:56.129682   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:58.670325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:23:58.685501   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:23:58.685564   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:23:58.728232   65164 cri.go:89] found id: ""
	I0920 22:23:58.728262   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.728274   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:23:58.728282   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:23:58.728339   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:23:58.762617   65164 cri.go:89] found id: ""
	I0920 22:23:58.762647   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.762657   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:23:58.762665   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:23:58.762727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:23:58.799427   65164 cri.go:89] found id: ""
	I0920 22:23:58.799454   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.799464   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:23:58.799471   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:23:58.799533   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:23:58.840100   65164 cri.go:89] found id: ""
	I0920 22:23:58.840126   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.840133   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:23:58.840140   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:23:58.840202   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:23:58.879097   65164 cri.go:89] found id: ""
	I0920 22:23:58.879123   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.879131   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:23:58.879137   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:23:58.879185   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:23:58.914839   65164 cri.go:89] found id: ""
	I0920 22:23:58.914879   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.914889   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:23:58.914897   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:23:58.914954   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:23:58.957890   65164 cri.go:89] found id: ""
	I0920 22:23:58.957917   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.957925   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:23:58.957931   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:23:58.957992   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:23:58.996355   65164 cri.go:89] found id: ""
	I0920 22:23:58.996380   65164 logs.go:276] 0 containers: []
	W0920 22:23:58.996390   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:23:58.996400   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:23:58.996419   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:23:59.010139   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:23:59.010162   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:23:59.083543   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:23:59.083567   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:23:59.083584   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:23:59.158430   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:23:59.158470   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:23:59.199488   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:23:59.199515   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:23:58.430584   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.431120   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:23:59.278671   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.777131   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:00.366154   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:02.866892   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:01.755931   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:01.770610   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:01.770684   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:01.806498   65164 cri.go:89] found id: ""
	I0920 22:24:01.806532   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.806544   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:01.806552   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:01.806611   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:01.849461   65164 cri.go:89] found id: ""
	I0920 22:24:01.849495   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.849505   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:01.849512   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:01.849570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:01.886634   65164 cri.go:89] found id: ""
	I0920 22:24:01.886674   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.886686   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:01.886693   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:01.886756   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:01.921036   65164 cri.go:89] found id: ""
	I0920 22:24:01.921064   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.921072   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:01.921079   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:01.921138   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:01.959066   65164 cri.go:89] found id: ""
	I0920 22:24:01.959097   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.959109   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:01.959116   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:01.959182   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:01.997105   65164 cri.go:89] found id: ""
	I0920 22:24:01.997135   65164 logs.go:276] 0 containers: []
	W0920 22:24:01.997146   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:01.997155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:01.997215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:02.039220   65164 cri.go:89] found id: ""
	I0920 22:24:02.039245   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.039254   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:02.039259   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:02.039318   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:02.080646   65164 cri.go:89] found id: ""
	I0920 22:24:02.080675   65164 logs.go:276] 0 containers: []
	W0920 22:24:02.080684   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:02.080691   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:02.080704   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:02.132408   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:02.132442   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:02.147039   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:02.147065   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:02.226357   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:02.226383   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:02.226402   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:02.310492   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:02.310527   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:04.854325   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:04.869319   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:04.869386   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:04.904951   65164 cri.go:89] found id: ""
	I0920 22:24:04.904981   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.904992   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:04.905001   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:04.905062   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:04.942283   65164 cri.go:89] found id: ""
	I0920 22:24:04.942307   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.942314   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:04.942319   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:04.942366   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:04.981814   65164 cri.go:89] found id: ""
	I0920 22:24:04.981841   65164 logs.go:276] 0 containers: []
	W0920 22:24:04.981851   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:04.981858   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:04.981914   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:05.016282   65164 cri.go:89] found id: ""
	I0920 22:24:05.016307   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.016316   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:05.016321   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:05.016369   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:05.059185   65164 cri.go:89] found id: ""
	I0920 22:24:05.059215   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.059226   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:05.059233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:05.059286   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:05.097814   65164 cri.go:89] found id: ""
	I0920 22:24:05.097836   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.097845   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:05.097851   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:05.097904   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:05.133890   65164 cri.go:89] found id: ""
	I0920 22:24:05.133918   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.133926   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:05.133932   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:05.133993   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:05.170616   65164 cri.go:89] found id: ""
	I0920 22:24:05.170641   65164 logs.go:276] 0 containers: []
	W0920 22:24:05.170649   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:05.170657   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:05.170668   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:05.221889   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:05.221921   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:05.235836   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:05.235863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:05.312145   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:05.312172   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:05.312188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:05.397092   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:05.397128   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:02.930186   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:04.931380   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.429840   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:03.778224   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:06.278765   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:08.279756   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:05.365583   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.865287   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:09.865560   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:07.938892   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:07.954037   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:07.954103   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:07.992847   65164 cri.go:89] found id: ""
	I0920 22:24:07.992872   65164 logs.go:276] 0 containers: []
	W0920 22:24:07.992880   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:07.992887   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:07.992971   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:08.030102   65164 cri.go:89] found id: ""
	I0920 22:24:08.030136   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.030148   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:08.030155   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:08.030215   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:08.071156   65164 cri.go:89] found id: ""
	I0920 22:24:08.071187   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.071202   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:08.071209   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:08.071264   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:08.111668   65164 cri.go:89] found id: ""
	I0920 22:24:08.111699   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.111712   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:08.111719   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:08.111777   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:08.148289   65164 cri.go:89] found id: ""
	I0920 22:24:08.148322   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.148333   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:08.148340   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:08.148402   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:08.189154   65164 cri.go:89] found id: ""
	I0920 22:24:08.189176   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.189184   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:08.189191   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:08.189242   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:08.225338   65164 cri.go:89] found id: ""
	I0920 22:24:08.225363   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.225374   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:08.225381   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:08.225439   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:08.274986   65164 cri.go:89] found id: ""
	I0920 22:24:08.275015   65164 logs.go:276] 0 containers: []
	W0920 22:24:08.275026   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:08.275036   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:08.275052   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:08.376769   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:08.376801   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:08.376819   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:08.455971   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:08.456008   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:08.496996   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:08.497026   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:08.549634   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:08.549669   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:09.930508   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.430716   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:10.778077   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:13.278116   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:12.365884   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:14.866612   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:11.065317   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:11.079438   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:11.079521   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:11.113628   65164 cri.go:89] found id: ""
	I0920 22:24:11.113657   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.113673   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:11.113680   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:11.113741   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:11.147698   65164 cri.go:89] found id: ""
	I0920 22:24:11.147727   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.147738   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:11.147745   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:11.147810   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:11.185153   65164 cri.go:89] found id: ""
	I0920 22:24:11.185183   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.185194   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:11.185202   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:11.185261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:11.220575   65164 cri.go:89] found id: ""
	I0920 22:24:11.220602   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.220611   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:11.220619   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:11.220677   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:11.257062   65164 cri.go:89] found id: ""
	I0920 22:24:11.257095   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.257107   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:11.257117   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:11.257174   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:11.295945   65164 cri.go:89] found id: ""
	I0920 22:24:11.295972   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.295982   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:11.295987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:11.296037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:11.335049   65164 cri.go:89] found id: ""
	I0920 22:24:11.335072   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.335080   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:11.335085   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:11.335134   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:11.377047   65164 cri.go:89] found id: ""
	I0920 22:24:11.377074   65164 logs.go:276] 0 containers: []
	W0920 22:24:11.377085   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:11.377094   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:11.377108   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:11.430338   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:11.430364   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:11.445225   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:11.445249   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:11.517988   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:11.518010   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:11.518024   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:11.596776   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:11.596813   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.140982   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:14.158445   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:14.158524   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:14.208478   65164 cri.go:89] found id: ""
	I0920 22:24:14.208508   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.208518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:14.208526   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:14.208585   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:14.266224   65164 cri.go:89] found id: ""
	I0920 22:24:14.266256   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.266268   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:14.266276   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:14.266334   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:14.309917   65164 cri.go:89] found id: ""
	I0920 22:24:14.309943   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.309954   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:14.309968   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:14.310026   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:14.348491   65164 cri.go:89] found id: ""
	I0920 22:24:14.348516   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.348523   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:14.348530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:14.348575   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:14.389170   65164 cri.go:89] found id: ""
	I0920 22:24:14.389197   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.389205   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:14.389212   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:14.389261   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:14.426006   65164 cri.go:89] found id: ""
	I0920 22:24:14.426034   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.426045   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:14.426052   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:14.426113   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:14.461496   65164 cri.go:89] found id: ""
	I0920 22:24:14.461519   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.461526   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:14.461532   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:14.461588   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:14.501424   65164 cri.go:89] found id: ""
	I0920 22:24:14.501457   65164 logs.go:276] 0 containers: []
	W0920 22:24:14.501474   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:14.501486   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:14.501499   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:14.550920   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:14.550954   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:14.565293   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:14.565322   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:14.638031   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:14.638059   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:14.638071   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:14.715631   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:14.715697   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:14.430748   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:16.930571   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:15.777268   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.778385   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.365248   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.366168   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:17.256947   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:17.269939   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:17.270009   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:17.305349   65164 cri.go:89] found id: ""
	I0920 22:24:17.305373   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.305384   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:17.305391   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:17.305453   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:17.347917   65164 cri.go:89] found id: ""
	I0920 22:24:17.347947   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.347959   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:17.347967   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:17.348024   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:17.383366   65164 cri.go:89] found id: ""
	I0920 22:24:17.383387   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.383396   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:17.383402   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:17.383462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:17.418055   65164 cri.go:89] found id: ""
	I0920 22:24:17.418079   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.418086   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:17.418092   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:17.418156   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:17.457651   65164 cri.go:89] found id: ""
	I0920 22:24:17.457680   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.457691   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:17.457698   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:17.457758   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:17.492527   65164 cri.go:89] found id: ""
	I0920 22:24:17.492557   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.492568   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:17.492576   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:17.492627   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:17.528964   65164 cri.go:89] found id: ""
	I0920 22:24:17.528993   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.529004   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:17.529017   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:17.529080   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:17.565286   65164 cri.go:89] found id: ""
	I0920 22:24:17.565315   65164 logs.go:276] 0 containers: []
	W0920 22:24:17.565326   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:17.565336   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:17.565351   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:17.617255   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:17.617290   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:17.631897   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:17.631925   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:17.700829   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:17.700851   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:17.700863   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:17.782597   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:17.782633   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.327305   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:20.340641   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:20.340731   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:20.378129   65164 cri.go:89] found id: ""
	I0920 22:24:20.378156   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.378167   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:20.378174   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:20.378228   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:20.414202   65164 cri.go:89] found id: ""
	I0920 22:24:20.414230   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.414240   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:20.414247   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:20.414307   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:20.451599   65164 cri.go:89] found id: ""
	I0920 22:24:20.451631   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.451641   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:20.451649   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:20.451708   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:20.486998   65164 cri.go:89] found id: ""
	I0920 22:24:20.487026   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.487034   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:20.487040   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:20.487091   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:20.525623   65164 cri.go:89] found id: ""
	I0920 22:24:20.525651   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.525661   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:20.525669   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:20.525727   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:20.560930   65164 cri.go:89] found id: ""
	I0920 22:24:20.560956   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.560967   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:20.560975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:20.561037   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:20.598358   65164 cri.go:89] found id: ""
	I0920 22:24:20.598385   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.598396   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:20.598403   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:20.598468   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:20.633645   65164 cri.go:89] found id: ""
	I0920 22:24:20.633671   65164 logs.go:276] 0 containers: []
	W0920 22:24:20.633683   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:20.633694   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:20.633707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:20.678847   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:20.678880   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:20.731758   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:20.731803   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:20.747298   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:20.747328   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:20.822433   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:20.822459   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:20.822473   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:18.932797   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.430477   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:19.778546   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:22.277786   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:21.866777   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:24.365161   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:23.401037   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:23.414055   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:23.414135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:23.452483   65164 cri.go:89] found id: ""
	I0920 22:24:23.452511   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.452522   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:23.452529   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:23.452587   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:23.491951   65164 cri.go:89] found id: ""
	I0920 22:24:23.491979   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.491987   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:23.491993   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:23.492039   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:23.533381   65164 cri.go:89] found id: ""
	I0920 22:24:23.533413   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.533424   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:23.533432   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:23.533492   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:23.572442   65164 cri.go:89] found id: ""
	I0920 22:24:23.572475   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.572485   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:23.572493   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:23.572554   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:23.609161   65164 cri.go:89] found id: ""
	I0920 22:24:23.609189   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.609200   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:23.609207   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:23.609272   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:23.645692   65164 cri.go:89] found id: ""
	I0920 22:24:23.645721   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.645730   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:23.645737   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:23.645824   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:23.681332   65164 cri.go:89] found id: ""
	I0920 22:24:23.681355   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.681363   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:23.681368   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:23.681414   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:23.716497   65164 cri.go:89] found id: ""
	I0920 22:24:23.716524   65164 logs.go:276] 0 containers: []
	W0920 22:24:23.716532   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:23.716541   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:23.716551   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:23.794446   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:23.794485   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:23.839598   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:23.839625   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:23.893259   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:23.893295   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:23.907430   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:23.907456   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:23.982193   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:23.431805   64543 pod_ready.go:103] pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:25.430647   64543 pod_ready.go:82] duration metric: took 4m0.006693945s for pod "metrics-server-6867b74b74-4j9zb" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:25.430668   64543 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 22:24:25.430675   64543 pod_ready.go:39] duration metric: took 4m7.427029914s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:25.430689   64543 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:24:25.430712   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:25.430753   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:25.478295   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:25.478324   64543 cri.go:89] found id: ""
	I0920 22:24:25.478334   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:25.478395   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.483530   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:25.483600   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:25.525758   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:25.525791   64543 cri.go:89] found id: ""
	I0920 22:24:25.525801   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:25.525855   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.530200   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:25.530272   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:25.581889   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:25.581911   64543 cri.go:89] found id: ""
	I0920 22:24:25.581920   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:25.581976   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.587886   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:25.587948   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:25.634412   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.634438   64543 cri.go:89] found id: ""
	I0920 22:24:25.634448   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:25.634508   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.638844   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:25.638907   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:25.674015   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.674040   64543 cri.go:89] found id: ""
	I0920 22:24:25.674049   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:25.674109   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.678865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:25.678935   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:25.715353   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:25.715380   64543 cri.go:89] found id: ""
	I0920 22:24:25.715390   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:25.715445   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.719967   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:25.720038   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:25.759899   64543 cri.go:89] found id: ""
	I0920 22:24:25.759925   64543 logs.go:276] 0 containers: []
	W0920 22:24:25.759933   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:25.759938   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:25.759996   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:25.805111   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.805139   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:25.805146   64543 cri.go:89] found id: ""
	I0920 22:24:25.805154   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:25.805225   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.809639   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:25.813697   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:25.813722   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:25.853582   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:25.853610   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:25.893403   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:25.893450   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:25.939559   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:25.939601   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:25.957288   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:25.957322   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:26.007398   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:26.007427   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:26.048801   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:26.048828   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:26.105292   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:26.105327   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:26.141333   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.141367   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.659413   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.659458   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:26.719990   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:26.720018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:26.797434   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.797480   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:26.956602   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:26.956645   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:24.278169   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.278275   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.365203   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:28.366087   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:26.482877   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:26.497511   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:26.497590   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:26.534711   65164 cri.go:89] found id: ""
	I0920 22:24:26.534732   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.534739   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:26.534745   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:26.534790   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:26.569664   65164 cri.go:89] found id: ""
	I0920 22:24:26.569696   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.569706   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:26.569714   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:26.569772   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:26.610623   65164 cri.go:89] found id: ""
	I0920 22:24:26.610651   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.610663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:26.610671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:26.610726   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:26.648337   65164 cri.go:89] found id: ""
	I0920 22:24:26.648368   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.648379   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:26.648387   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:26.648446   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:26.687633   65164 cri.go:89] found id: ""
	I0920 22:24:26.687661   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.687673   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:26.687680   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:26.687752   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:26.724361   65164 cri.go:89] found id: ""
	I0920 22:24:26.724390   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.724410   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:26.724419   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:26.724475   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:26.766101   65164 cri.go:89] found id: ""
	I0920 22:24:26.766127   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.766137   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:26.766144   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:26.766196   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:26.805844   65164 cri.go:89] found id: ""
	I0920 22:24:26.805872   65164 logs.go:276] 0 containers: []
	W0920 22:24:26.805883   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:26.805894   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:26.805907   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:26.880703   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:26.880725   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:26.880740   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:26.967599   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:26.967635   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:27.017359   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:27.017384   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:27.071335   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:27.071375   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:29.588374   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.602233   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.602308   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.639528   65164 cri.go:89] found id: ""
	I0920 22:24:29.639551   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.639562   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:29.639570   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.639626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.677766   65164 cri.go:89] found id: ""
	I0920 22:24:29.677794   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.677810   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:29.677817   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.677874   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.714616   65164 cri.go:89] found id: ""
	I0920 22:24:29.714652   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.714663   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:29.714671   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.714722   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.750760   65164 cri.go:89] found id: ""
	I0920 22:24:29.750787   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.750795   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:29.750802   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.750862   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.791014   65164 cri.go:89] found id: ""
	I0920 22:24:29.791046   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.791057   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:29.791064   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791135   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.827114   65164 cri.go:89] found id: ""
	I0920 22:24:29.827146   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.827158   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:29.827166   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.827221   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.866063   65164 cri.go:89] found id: ""
	I0920 22:24:29.866087   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.866097   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.866105   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:29.866159   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:29.905616   65164 cri.go:89] found id: ""
	I0920 22:24:29.905646   65164 logs.go:276] 0 containers: []
	W0920 22:24:29.905657   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:29.905668   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:29.905681   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:29.990749   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:29.990787   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.030931   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.030975   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.084404   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.084440   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.100878   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:30.100908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:30.175423   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:29.517258   64543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:29.535431   64543 api_server.go:72] duration metric: took 4m18.790198719s to wait for apiserver process to appear ...
	I0920 22:24:29.535457   64543 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:24:29.535493   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:29.535549   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:29.581802   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:29.581828   64543 cri.go:89] found id: ""
	I0920 22:24:29.581836   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:29.581890   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.590758   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:29.590827   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:29.636548   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:29.636574   64543 cri.go:89] found id: ""
	I0920 22:24:29.636583   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:29.636644   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.641309   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:29.641378   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:29.686989   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:29.687017   64543 cri.go:89] found id: ""
	I0920 22:24:29.687026   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:29.687086   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.691971   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:29.692056   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:29.733852   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:29.733880   64543 cri.go:89] found id: ""
	I0920 22:24:29.733890   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:29.733954   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.738246   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:29.738322   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:29.786495   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:29.786520   64543 cri.go:89] found id: ""
	I0920 22:24:29.786527   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:29.786580   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.791112   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:29.791170   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:29.836075   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:29.836098   64543 cri.go:89] found id: ""
	I0920 22:24:29.836106   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:29.836150   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.840577   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:29.840652   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:29.882111   64543 cri.go:89] found id: ""
	I0920 22:24:29.882142   64543 logs.go:276] 0 containers: []
	W0920 22:24:29.882154   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:29.882161   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:29.882247   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:29.926138   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:29.926163   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:29.926169   64543 cri.go:89] found id: ""
	I0920 22:24:29.926177   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:29.926235   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.930772   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:29.934923   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:29.934952   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:30.052134   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:30.052171   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:30.101772   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:30.101795   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:30.143728   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:30.143760   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:30.189176   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:30.189204   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:30.246393   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:30.246423   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:30.289832   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:30.289864   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:30.712613   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:30.712648   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:30.783930   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:30.783962   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:30.838078   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:30.838112   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:30.889200   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:30.889235   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:30.924455   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:30.924492   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:30.961916   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:30.961945   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:30.278927   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.777947   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:30.867260   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.868094   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:32.675815   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:32.689449   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:32.689517   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:32.725045   65164 cri.go:89] found id: ""
	I0920 22:24:32.725072   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.725080   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:32.725086   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:32.725139   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:32.760529   65164 cri.go:89] found id: ""
	I0920 22:24:32.760555   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.760566   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:32.760572   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:32.760629   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:32.795500   65164 cri.go:89] found id: ""
	I0920 22:24:32.795526   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.795536   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:32.795543   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:32.795604   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:32.834854   65164 cri.go:89] found id: ""
	I0920 22:24:32.834884   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.834895   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:32.834904   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:32.834972   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:32.875928   65164 cri.go:89] found id: ""
	I0920 22:24:32.875956   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.875967   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:32.875975   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:32.876038   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:32.910115   65164 cri.go:89] found id: ""
	I0920 22:24:32.910140   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.910149   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:32.910155   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:32.910204   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:32.944790   65164 cri.go:89] found id: ""
	I0920 22:24:32.944817   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.944828   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:32.944835   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:32.944896   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:32.980012   65164 cri.go:89] found id: ""
	I0920 22:24:32.980043   65164 logs.go:276] 0 containers: []
	W0920 22:24:32.980054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:32.980064   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:32.980077   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:33.032827   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.032864   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:33.047478   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:33.047507   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:33.124253   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:33.124280   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:33.124294   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:33.200625   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:33.200666   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:35.740556   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:35.755559   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:35.755626   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:35.796164   65164 cri.go:89] found id: ""
	I0920 22:24:35.796191   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.796200   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:35.796211   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:35.796260   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:35.832565   65164 cri.go:89] found id: ""
	I0920 22:24:35.832591   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.832601   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:35.832609   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:35.832678   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.481353   64543 api_server.go:253] Checking apiserver healthz at https://192.168.50.193:8443/healthz ...
	I0920 22:24:33.486802   64543 api_server.go:279] https://192.168.50.193:8443/healthz returned 200:
	ok
	I0920 22:24:33.487816   64543 api_server.go:141] control plane version: v1.31.1
	I0920 22:24:33.487835   64543 api_server.go:131] duration metric: took 3.952371649s to wait for apiserver health ...
	I0920 22:24:33.487842   64543 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:24:33.487865   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:33.487917   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:33.528958   64543 cri.go:89] found id: "15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.528989   64543 cri.go:89] found id: ""
	I0920 22:24:33.528999   64543 logs.go:276] 1 containers: [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097]
	I0920 22:24:33.529061   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.533727   64543 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:33.533802   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:33.573912   64543 cri.go:89] found id: "de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:33.573937   64543 cri.go:89] found id: ""
	I0920 22:24:33.573947   64543 logs.go:276] 1 containers: [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f]
	I0920 22:24:33.574016   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.578918   64543 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:33.578998   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:33.628384   64543 cri.go:89] found id: "c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:33.628416   64543 cri.go:89] found id: ""
	I0920 22:24:33.628427   64543 logs.go:276] 1 containers: [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1]
	I0920 22:24:33.628490   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.632875   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:33.632955   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:33.674561   64543 cri.go:89] found id: "80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:33.674588   64543 cri.go:89] found id: ""
	I0920 22:24:33.674597   64543 logs.go:276] 1 containers: [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b]
	I0920 22:24:33.674654   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.679984   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:33.680055   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:33.715881   64543 cri.go:89] found id: "e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.715906   64543 cri.go:89] found id: ""
	I0920 22:24:33.715914   64543 logs.go:276] 1 containers: [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640]
	I0920 22:24:33.715963   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.720339   64543 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:33.720408   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:33.764661   64543 cri.go:89] found id: "065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:33.764680   64543 cri.go:89] found id: ""
	I0920 22:24:33.764687   64543 logs.go:276] 1 containers: [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a]
	I0920 22:24:33.764734   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.769479   64543 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:33.769533   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:33.810893   64543 cri.go:89] found id: ""
	I0920 22:24:33.810920   64543 logs.go:276] 0 containers: []
	W0920 22:24:33.810932   64543 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:33.810950   64543 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0920 22:24:33.811023   64543 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 22:24:33.848882   64543 cri.go:89] found id: "4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:33.848904   64543 cri.go:89] found id: "a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.848908   64543 cri.go:89] found id: ""
	I0920 22:24:33.848914   64543 logs.go:276] 2 containers: [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba]
	I0920 22:24:33.848964   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.853597   64543 ssh_runner.go:195] Run: which crictl
	I0920 22:24:33.858054   64543 logs.go:123] Gathering logs for kube-apiserver [15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097] ...
	I0920 22:24:33.858074   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 15ee26017cb998e6a6dc149c96dca2d527647ce6074b88a705b4a3952137b097"
	I0920 22:24:33.908602   64543 logs.go:123] Gathering logs for kube-proxy [e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640] ...
	I0920 22:24:33.908636   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7fe3b2aa3fbf0a9e3181910392caa579e639d49c5fc0a485fd239ff9db15640"
	I0920 22:24:33.953823   64543 logs.go:123] Gathering logs for storage-provisioner [a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba] ...
	I0920 22:24:33.953850   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70c22cc89decbc16fd4ae497c97baadbb13d3eb2aea093323de14f2ef5b7dba"
	I0920 22:24:33.993841   64543 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:33.993868   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:34.009784   64543 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:34.009808   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 22:24:34.134468   64543 logs.go:123] Gathering logs for etcd [de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f] ...
	I0920 22:24:34.134496   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 de480c158751f5e7fe71607f5bfa335e12eee62bf7b2d6084651254334591b7f"
	I0920 22:24:34.178737   64543 logs.go:123] Gathering logs for coredns [c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1] ...
	I0920 22:24:34.178772   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c6e4193ea7db377033fb5dd5fd1efad9ceb59a686c7d42ab5a06e1475f4187e1"
	I0920 22:24:34.218475   64543 logs.go:123] Gathering logs for kube-scheduler [80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b] ...
	I0920 22:24:34.218519   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80549c2470100875dd70110dae548963948b20f73c32960debeb26da1081fe0b"
	I0920 22:24:34.261246   64543 logs.go:123] Gathering logs for kube-controller-manager [065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a] ...
	I0920 22:24:34.261272   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 065928d7289f256285abfac03a28e7318c68050080a27ce3e67a72c6364cd68a"
	I0920 22:24:34.324980   64543 logs.go:123] Gathering logs for storage-provisioner [4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0] ...
	I0920 22:24:34.325018   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cb8dacd4796acac637bb327a5e5d0a5b2bab132dac8e556313cf65247aca0d0"
	I0920 22:24:34.364223   64543 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:34.364249   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:34.730213   64543 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:34.730262   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:34.808747   64543 logs.go:123] Gathering logs for container status ...
	I0920 22:24:34.808786   64543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:37.357880   64543 system_pods.go:59] 8 kube-system pods found
	I0920 22:24:37.357914   64543 system_pods.go:61] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.357919   64543 system_pods.go:61] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.357923   64543 system_pods.go:61] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.357928   64543 system_pods.go:61] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.357933   64543 system_pods.go:61] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.357938   64543 system_pods.go:61] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.357946   64543 system_pods.go:61] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.357960   64543 system_pods.go:61] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.357969   64543 system_pods.go:74] duration metric: took 3.870121418s to wait for pod list to return data ...
	I0920 22:24:37.357979   64543 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:24:37.361128   64543 default_sa.go:45] found service account: "default"
	I0920 22:24:37.361152   64543 default_sa.go:55] duration metric: took 3.165479ms for default service account to be created ...
	I0920 22:24:37.361160   64543 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:24:37.366724   64543 system_pods.go:86] 8 kube-system pods found
	I0920 22:24:37.366755   64543 system_pods.go:89] "coredns-7c65d6cfc9-46wxm" [9bfc28b7-4bdf-4f37-82fb-59316bec0274] Running
	I0920 22:24:37.366764   64543 system_pods.go:89] "etcd-embed-certs-562750" [bb2b2d6f-9fcc-4838-8fbc-f07dc46efc5b] Running
	I0920 22:24:37.366771   64543 system_pods.go:89] "kube-apiserver-embed-certs-562750" [b3fc30c3-cbfa-43ac-b085-08e65dd9a6b1] Running
	I0920 22:24:37.366777   64543 system_pods.go:89] "kube-controller-manager-embed-certs-562750" [8d289107-f3ea-419a-9dde-e90cd0e0bc78] Running
	I0920 22:24:37.366781   64543 system_pods.go:89] "kube-proxy-wsrnw" [173d3a74-5546-4d40-8111-0607c3cbdbba] Running
	I0920 22:24:37.366785   64543 system_pods.go:89] "kube-scheduler-embed-certs-562750" [0dc00060-10fd-48ab-b922-6e50a7be4c43] Running
	I0920 22:24:37.366791   64543 system_pods.go:89] "metrics-server-6867b74b74-4j9zb" [cce79400-7958-4b2f-bcb0-2e59046134cd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:24:37.366795   64543 system_pods.go:89] "storage-provisioner" [12d59a28-0756-4027-bd21-97fdc86a8137] Running
	I0920 22:24:37.366803   64543 system_pods.go:126] duration metric: took 5.637987ms to wait for k8s-apps to be running ...
	I0920 22:24:37.366809   64543 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:24:37.366865   64543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:37.385292   64543 system_svc.go:56] duration metric: took 18.47174ms WaitForService to wait for kubelet
	I0920 22:24:37.385326   64543 kubeadm.go:582] duration metric: took 4m26.64009895s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:24:37.385352   64543 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:24:37.388635   64543 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:24:37.388661   64543 node_conditions.go:123] node cpu capacity is 2
	I0920 22:24:37.388675   64543 node_conditions.go:105] duration metric: took 3.31714ms to run NodePressure ...
	I0920 22:24:37.388690   64543 start.go:241] waiting for startup goroutines ...
	I0920 22:24:37.388699   64543 start.go:246] waiting for cluster config update ...
	I0920 22:24:37.388717   64543 start.go:255] writing updated cluster config ...
	I0920 22:24:37.389073   64543 ssh_runner.go:195] Run: rm -f paused
	I0920 22:24:37.442347   64543 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:24:37.443855   64543 out.go:177] * Done! kubectl is now configured to use "embed-certs-562750" cluster and "default" namespace by default
	I0920 22:24:34.779041   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:36.779446   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.365819   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:37.865951   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:35.868115   65164 cri.go:89] found id: ""
	I0920 22:24:35.868138   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.868145   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:35.868152   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:35.868199   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:35.902417   65164 cri.go:89] found id: ""
	I0920 22:24:35.902447   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.902457   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:35.902465   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:35.902531   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:35.939195   65164 cri.go:89] found id: ""
	I0920 22:24:35.939226   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.939242   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:35.939256   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:35.939317   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:35.981364   65164 cri.go:89] found id: ""
	I0920 22:24:35.981387   65164 logs.go:276] 0 containers: []
	W0920 22:24:35.981397   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:35.981405   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:35.981462   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:36.019938   65164 cri.go:89] found id: ""
	I0920 22:24:36.019968   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.019979   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:36.019987   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:36.020049   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:36.058764   65164 cri.go:89] found id: ""
	I0920 22:24:36.058788   65164 logs.go:276] 0 containers: []
	W0920 22:24:36.058795   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:36.058803   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:36.058812   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:36.098442   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:36.098468   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:36.152150   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:36.152188   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:36.168940   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:36.168972   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:36.242112   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:36.242134   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:36.242148   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:38.843420   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:38.857665   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:38.857746   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:38.901473   65164 cri.go:89] found id: ""
	I0920 22:24:38.901507   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.901518   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:38.901532   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:38.901591   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:38.937980   65164 cri.go:89] found id: ""
	I0920 22:24:38.938012   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.938022   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:38.938030   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:38.938085   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:38.973476   65164 cri.go:89] found id: ""
	I0920 22:24:38.973506   65164 logs.go:276] 0 containers: []
	W0920 22:24:38.973514   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:38.973520   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:38.973579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:39.009144   65164 cri.go:89] found id: ""
	I0920 22:24:39.009177   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.009197   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:39.009205   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:39.009282   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:39.046471   65164 cri.go:89] found id: ""
	I0920 22:24:39.046495   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.046505   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:39.046513   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:39.046570   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:39.080274   65164 cri.go:89] found id: ""
	I0920 22:24:39.080303   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.080315   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:39.080323   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:39.080381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:39.117267   65164 cri.go:89] found id: ""
	I0920 22:24:39.117297   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.117307   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:39.117314   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:39.117381   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:39.158015   65164 cri.go:89] found id: ""
	I0920 22:24:39.158043   65164 logs.go:276] 0 containers: []
	W0920 22:24:39.158054   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:39.158065   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:39.158079   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:39.210324   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:39.210360   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:39.226827   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:39.226857   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:39.299060   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:39.299088   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:39.299157   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:39.386377   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:39.386420   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:39.277971   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.278082   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:43.278136   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:40.365418   64930 pod_ready.go:103] pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:41.359063   64930 pod_ready.go:82] duration metric: took 4m0.000372202s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" ...
	E0920 22:24:41.359104   64930 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-4xsdw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:24:41.359120   64930 pod_ready.go:39] duration metric: took 4m12.013463696s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:24:41.359144   64930 kubeadm.go:597] duration metric: took 4m19.989072639s to restartPrimaryControlPlane
	W0920 22:24:41.359208   64930 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:41.359236   64930 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:41.925762   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:41.939840   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:41.939911   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:41.979910   65164 cri.go:89] found id: ""
	I0920 22:24:41.979943   65164 logs.go:276] 0 containers: []
	W0920 22:24:41.979964   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:41.979973   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:41.980036   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:42.018711   65164 cri.go:89] found id: ""
	I0920 22:24:42.018738   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.018747   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:42.018752   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:42.018796   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:42.056718   65164 cri.go:89] found id: ""
	I0920 22:24:42.056741   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.056749   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:42.056756   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:42.056826   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:42.093609   65164 cri.go:89] found id: ""
	I0920 22:24:42.093639   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.093650   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:42.093657   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:42.093729   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:42.130129   65164 cri.go:89] found id: ""
	I0920 22:24:42.130157   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.130167   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:42.130175   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:42.130233   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:42.166747   65164 cri.go:89] found id: ""
	I0920 22:24:42.166796   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.166807   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:42.166815   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:42.166875   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:42.213926   65164 cri.go:89] found id: ""
	I0920 22:24:42.213960   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.213971   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:42.213982   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:42.214047   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:42.251645   65164 cri.go:89] found id: ""
	I0920 22:24:42.251671   65164 logs.go:276] 0 containers: []
	W0920 22:24:42.251681   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:42.251693   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:42.251707   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:42.265133   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:42.265165   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:42.340018   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:42.340044   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:42.340058   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:42.435006   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:42.435056   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:42.478991   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:42.479022   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.032066   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:45.048658   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:24:45.048734   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:24:45.086708   65164 cri.go:89] found id: ""
	I0920 22:24:45.086733   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.086743   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:24:45.086750   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:24:45.086815   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:24:45.124311   65164 cri.go:89] found id: ""
	I0920 22:24:45.124342   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.124353   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:24:45.124360   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:24:45.124407   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:24:45.164253   65164 cri.go:89] found id: ""
	I0920 22:24:45.164283   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.164293   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:24:45.164301   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:24:45.164364   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:24:45.206382   65164 cri.go:89] found id: ""
	I0920 22:24:45.206412   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.206421   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:24:45.206429   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:24:45.206494   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:24:45.241949   65164 cri.go:89] found id: ""
	I0920 22:24:45.241977   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.241986   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:24:45.241991   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:24:45.242052   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:24:45.283718   65164 cri.go:89] found id: ""
	I0920 22:24:45.283743   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.283752   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:24:45.283757   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:24:45.283812   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:24:45.331523   65164 cri.go:89] found id: ""
	I0920 22:24:45.331564   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.331576   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:24:45.331585   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:24:45.331672   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:24:45.372428   65164 cri.go:89] found id: ""
	I0920 22:24:45.372459   65164 logs.go:276] 0 containers: []
	W0920 22:24:45.372469   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:24:45.372479   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:24:45.372496   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:24:45.387751   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:24:45.387786   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:24:45.470354   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:24:45.470377   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:24:45.470392   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:24:45.557865   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:24:45.557902   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 22:24:45.600123   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:24:45.600153   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:24:45.278256   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:47.278977   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:48.153277   65164 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:24:48.167796   65164 kubeadm.go:597] duration metric: took 4m4.375477562s to restartPrimaryControlPlane
	W0920 22:24:48.167875   65164 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:24:48.167906   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:24:48.626343   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:24:48.644488   65164 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:24:48.655891   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:24:48.666885   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:24:48.666910   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:24:48.666985   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:24:48.676979   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:24:48.677048   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:24:48.687890   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:24:48.697890   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:24:48.697955   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:24:48.708412   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.719024   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:24:48.719093   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:24:48.729321   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:24:48.739258   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:24:48.739329   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
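The four grep-then-rm pairs above are minikube's stale kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. A minimal local sketch of that pattern, assuming direct shell access instead of minikube's SSH runner and using the endpoint string from the log purely as an illustrative value:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // illustrative endpoint taken from the log above
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is missing or the file does not exist
            if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
                fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
                os.Remove(f) // error ignored: the file may simply be absent
            }
        }
    }

grep exits non-zero both when the string is missing and when the file does not exist, which is why the log treats "No such file or directory" the same as a stale config.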
	I0920 22:24:48.749787   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:24:48.825173   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:24:48.825258   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:24:48.982428   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:24:48.982587   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:24:48.982721   65164 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:24:49.194912   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:24:49.196899   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:24:49.196999   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:24:49.197078   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:24:49.197222   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:24:49.197328   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:24:49.197421   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:24:49.197495   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:24:49.197574   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:24:49.197662   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:24:49.197761   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:24:49.197887   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:24:49.197948   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:24:49.198036   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:24:49.468998   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:24:49.613979   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:24:49.868901   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:24:50.071143   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:24:50.093179   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:24:50.094030   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:24:50.094086   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:24:50.232993   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:24:50.234761   65164 out.go:235]   - Booting up control plane ...
	I0920 22:24:50.234871   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:24:50.247358   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:24:50.248714   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:24:50.249690   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:24:50.252874   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
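At this point kubeadm has written the static Pod manifests for the apiserver, controller-manager, scheduler, and etcd, and is waiting for the kubelet to start them. A quick sanity check is simply to list the manifest folder named above; a sketch assuming local access to /etc/kubernetes/manifests on the node:

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // manifest folder from the kubeadm output above
        entries, err := os.ReadDir("/etc/kubernetes/manifests")
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            fmt.Println(e.Name())
        }
    }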
	I0920 22:24:49.778284   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:52.277872   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:54.278087   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:56.278439   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:24:58.777573   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:00.777673   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:02.777879   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.503130   64930 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.143868713s)
	I0920 22:25:07.503224   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:07.519558   64930 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:07.530384   64930 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:07.540441   64930 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:07.540469   64930 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:07.540521   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0920 22:25:07.550627   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:07.550694   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:07.560193   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0920 22:25:07.569908   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:07.569966   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:07.580326   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.592893   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:07.592969   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:07.603183   64930 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0920 22:25:07.613494   64930 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:07.613558   64930 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:07.623271   64930 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:07.671307   64930 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:07.671611   64930 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:07.801578   64930 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:07.801721   64930 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:07.801867   64930 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:07.811576   64930 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:04.778293   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.278844   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:07.814524   64930 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:07.814630   64930 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:07.814714   64930 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:07.814828   64930 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:07.814911   64930 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:07.815048   64930 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:07.815155   64930 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:07.815261   64930 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:07.815374   64930 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:07.815473   64930 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:07.815580   64930 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:07.815640   64930 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:07.815728   64930 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:08.129976   64930 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:08.305286   64930 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:08.489838   64930 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:08.656531   64930 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:08.909653   64930 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:08.910197   64930 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:08.913016   64930 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:08.914949   64930 out.go:235]   - Booting up control plane ...
	I0920 22:25:08.915080   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:08.915160   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:08.915571   64930 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:08.935196   64930 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:08.941127   64930 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:08.941188   64930 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:09.077238   64930 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:09.077420   64930 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:25:09.580067   64930 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.879816ms
	I0920 22:25:09.580155   64930 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:25:09.779112   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:11.779164   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:14.582062   64930 kubeadm.go:310] [api-check] The API server is healthy after 5.001748542s
	I0920 22:25:14.600094   64930 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:25:14.622724   64930 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:25:14.657734   64930 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:25:14.657985   64930 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-799783 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:25:14.673021   64930 kubeadm.go:310] [bootstrap-token] Using token: xygl58.yjxbp1o582mz9yse
	I0920 22:25:14.675015   64930 out.go:235]   - Configuring RBAC rules ...
	I0920 22:25:14.675149   64930 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:25:14.682527   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:25:14.696372   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:25:14.700087   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:25:14.703910   64930 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:25:14.707739   64930 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:25:14.991815   64930 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:25:15.451193   64930 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:25:15.990452   64930 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:25:15.990476   64930 kubeadm.go:310] 
	I0920 22:25:15.990551   64930 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:25:15.990561   64930 kubeadm.go:310] 
	I0920 22:25:15.990692   64930 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:25:15.990720   64930 kubeadm.go:310] 
	I0920 22:25:15.990765   64930 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:25:15.990873   64930 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:25:15.990983   64930 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:25:15.990998   64930 kubeadm.go:310] 
	I0920 22:25:15.991082   64930 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:25:15.991091   64930 kubeadm.go:310] 
	I0920 22:25:15.991157   64930 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:25:15.991169   64930 kubeadm.go:310] 
	I0920 22:25:15.991248   64930 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:25:15.991352   64930 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:25:15.991458   64930 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:25:15.991469   64930 kubeadm.go:310] 
	I0920 22:25:15.991587   64930 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:25:15.991695   64930 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:25:15.991705   64930 kubeadm.go:310] 
	I0920 22:25:15.991828   64930 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.991969   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:25:15.991996   64930 kubeadm.go:310] 	--control-plane 
	I0920 22:25:15.992004   64930 kubeadm.go:310] 
	I0920 22:25:15.992116   64930 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:25:15.992128   64930 kubeadm.go:310] 
	I0920 22:25:15.992237   64930 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token xygl58.yjxbp1o582mz9yse \
	I0920 22:25:15.992374   64930 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:25:15.993135   64930 kubeadm.go:310] W0920 22:25:07.638229    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993525   64930 kubeadm.go:310] W0920 22:25:07.639053    2519 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:25:15.993662   64930 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
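The --discovery-token-ca-cert-hash value printed in the join command above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short sketch that recomputes it from a CA file, assuming the CA sits under the certificateDir shown earlier (/var/lib/minikube/certs/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // assumption: CA path under the certificateDir reported by kubeadm above
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }

The printed sha256:... value should match the one embedded in the kubeadm join command above.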
	I0920 22:25:15.993697   64930 cni.go:84] Creating CNI manager for ""
	I0920 22:25:15.993707   64930 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:25:15.995878   64930 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:25:13.779628   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:16.279390   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:15.997224   64930 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:25:16.009209   64930 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
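The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI configuration that the "Configuring bridge CNI" step refers to. Its exact contents are not reproduced in this log; a hypothetical minimal conflist of the kind the CNI bridge, host-local, and portmap plugins accept looks roughly like the following (all field values here are illustrative, not minikube's actual file):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }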
	I0920 22:25:16.056563   64930 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:25:16.056636   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.056650   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-799783 minikube.k8s.io/updated_at=2024_09_20T22_25_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=default-k8s-diff-port-799783 minikube.k8s.io/primary=true
	I0920 22:25:16.306668   64930 ops.go:34] apiserver oom_adj: -16
	I0920 22:25:16.306700   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:16.807170   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.306777   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:17.807844   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.307336   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:18.807397   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.307726   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:19.807086   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.306789   64930 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:25:20.440925   64930 kubeadm.go:1113] duration metric: took 4.384353815s to wait for elevateKubeSystemPrivileges
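The repeated "kubectl get sa default" invocations above are minikube polling until the default ServiceAccount exists, since it is created asynchronously once the control plane and controller-manager are up; elevateKubeSystemPrivileges finishes only after that. A local sketch of the same retry loop, assuming kubectl is on PATH and already pointed at the cluster rather than going through the SSH runner:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // exits non-zero until the "default" ServiceAccount has been created
            if err := exec.Command("kubectl", "-n", "default", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is present")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        log.Fatal("timed out waiting for the default service account")
    }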
	I0920 22:25:20.440962   64930 kubeadm.go:394] duration metric: took 4m59.125193253s to StartCluster
	I0920 22:25:20.440983   64930 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.441088   64930 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:25:20.442725   64930 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:25:20.443037   64930 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:25:20.443113   64930 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:25:20.443191   64930 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443209   64930 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.443217   64930 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:25:20.443243   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.443268   64930 config.go:182] Loaded profile config "default-k8s-diff-port-799783": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:25:20.443324   64930 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443344   64930 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-799783"
	I0920 22:25:20.443654   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443690   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.443738   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.443746   64930 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-799783"
	I0920 22:25:20.443766   64930 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:20.443776   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	W0920 22:25:20.443784   64930 addons.go:243] addon metrics-server should already be in state true
	I0920 22:25:20.443814   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.444185   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.444234   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.445296   64930 out.go:177] * Verifying Kubernetes components...
	I0920 22:25:20.447251   64930 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:25:20.460387   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36667
	I0920 22:25:20.460405   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40197
	I0920 22:25:20.460831   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.460845   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.461346   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461369   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461486   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.461511   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.461774   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.461877   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.462078   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.462361   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.462402   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.463069   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43779
	I0920 22:25:20.463410   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.463880   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.463900   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.465938   64930 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-799783"
	W0920 22:25:20.465969   64930 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:25:20.465998   64930 host.go:66] Checking if "default-k8s-diff-port-799783" exists ...
	I0920 22:25:20.466384   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.466422   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.467044   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.467706   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.467742   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.482478   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38879
	I0920 22:25:20.483079   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483326   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
	I0920 22:25:20.483597   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.483621   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.483758   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.483921   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484249   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.484269   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.484562   64930 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:25:20.484633   64930 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:25:20.484683   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.484899   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.487182   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.489324   64930 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:25:20.489534   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
	I0920 22:25:20.489908   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.490596   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.490619   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.490892   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.491065   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.491195   64930 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.491216   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:25:20.491235   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.493798   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.494931   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495325   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.495346   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.495482   64930 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:25:20.495649   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.495816   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.495967   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.496091   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.496876   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:25:20.496899   64930 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:25:20.496919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.500143   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500691   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.500715   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.500900   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.501086   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.501243   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.501378   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.502515   64930 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34479
	I0920 22:25:20.502834   64930 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:25:20.503415   64930 main.go:141] libmachine: Using API Version  1
	I0920 22:25:20.503435   64930 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:25:20.503858   64930 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:25:20.504115   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetState
	I0920 22:25:20.506006   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .DriverName
	I0920 22:25:20.506217   64930 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.506234   64930 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:25:20.506253   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHHostname
	I0920 22:25:20.509565   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510128   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:89:df", ip: ""} in network mk-default-k8s-diff-port-799783: {Iface:virbr1 ExpiryTime:2024-09-20 23:20:07 +0000 UTC Type:0 Mac:52:54:00:30:89:df Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:default-k8s-diff-port-799783 Clientid:01:52:54:00:30:89:df}
	I0920 22:25:20.510175   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | domain default-k8s-diff-port-799783 has defined IP address 192.168.39.94 and MAC address 52:54:00:30:89:df in network mk-default-k8s-diff-port-799783
	I0920 22:25:20.510373   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHPort
	I0920 22:25:20.510558   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHKeyPath
	I0920 22:25:20.510702   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .GetSSHUsername
	I0920 22:25:20.510837   64930 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/default-k8s-diff-port-799783/id_rsa Username:docker}
	I0920 22:25:20.644780   64930 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:25:20.662529   64930 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717900   64930 node_ready.go:49] node "default-k8s-diff-port-799783" has status "Ready":"True"
	I0920 22:25:20.717930   64930 node_ready.go:38] duration metric: took 55.371957ms for node "default-k8s-diff-port-799783" to be "Ready" ...
	I0920 22:25:20.717959   64930 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:20.751849   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:20.793517   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:25:20.827184   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:25:20.827210   64930 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:25:20.830914   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:25:20.882548   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:25:20.882577   64930 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:25:20.942208   64930 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:20.942235   64930 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:25:20.998935   64930 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:25:21.567298   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567313   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567333   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567338   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567602   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567648   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.567677   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567694   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.567705   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.567921   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.567947   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.567952   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569272   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569294   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569344   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.569361   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.569606   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.569641   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.569641   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.589899   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.589919   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.590252   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.590267   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.590273   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842284   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842313   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.842673   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.842674   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.842703   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.842716   64930 main.go:141] libmachine: Making call to close driver server
	I0920 22:25:21.842726   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) Calling .Close
	I0920 22:25:21.843052   64930 main.go:141] libmachine: (default-k8s-diff-port-799783) DBG | Closing plugin on server side
	I0920 22:25:21.843086   64930 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:25:21.843100   64930 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:25:21.843578   64930 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-799783"
	I0920 22:25:21.847202   64930 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0920 22:25:18.778824   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:20.779763   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.279489   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:21.849188   64930 addons.go:510] duration metric: took 1.406076182s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0920 22:25:22.761019   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:23.758480   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:23.758502   64930 pod_ready.go:82] duration metric: took 3.006618308s for pod "coredns-7c65d6cfc9-dpwmm" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:23.758514   64930 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:25.776826   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:27.777926   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:25.766257   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:28.265538   64930 pod_ready.go:103] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:29.784175   64930 pod_ready.go:93] pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.784202   64930 pod_ready.go:82] duration metric: took 6.02568215s for pod "coredns-7c65d6cfc9-lgvzf" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.784213   64930 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790351   64930 pod_ready.go:93] pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.790375   64930 pod_ready.go:82] duration metric: took 6.155652ms for pod "etcd-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.790383   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796400   64930 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.796423   64930 pod_ready.go:82] duration metric: took 6.034163ms for pod "kube-apiserver-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.796432   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801470   64930 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.801495   64930 pod_ready.go:82] duration metric: took 5.056014ms for pod "kube-controller-manager-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.801504   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806664   64930 pod_ready.go:93] pod "kube-proxy-h7j48" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:29.806683   64930 pod_ready.go:82] duration metric: took 5.17339ms for pod "kube-proxy-h7j48" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:29.806691   64930 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163945   64930 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace has status "Ready":"True"
	I0920 22:25:30.163973   64930 pod_ready.go:82] duration metric: took 357.276064ms for pod "kube-scheduler-default-k8s-diff-port-799783" in "kube-system" namespace to be "Ready" ...
	I0920 22:25:30.163981   64930 pod_ready.go:39] duration metric: took 9.446007204s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
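Each pod_ready wait above reduces to reading the pod's Ready condition until it reports True. A minimal client-go sketch of that check for a single pod; the kubeconfig path is taken from this log, the pod name is one of the CoreDNS pods listed above, and the imports are the standard client-go packages, not minikube code:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // kubeconfig path as reported earlier in this log
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19672-9614/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            log.Fatal(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-dpwmm", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        ready := false
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                ready = cond.Status == corev1.ConditionTrue
            }
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }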
	I0920 22:25:30.163995   64930 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:25:30.164067   64930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:25:30.180030   64930 api_server.go:72] duration metric: took 9.736951732s to wait for apiserver process to appear ...
	I0920 22:25:30.180063   64930 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:25:30.180084   64930 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8444/healthz ...
	I0920 22:25:30.184582   64930 api_server.go:279] https://192.168.39.94:8444/healthz returned 200:
	ok
	I0920 22:25:30.185462   64930 api_server.go:141] control plane version: v1.31.1
	I0920 22:25:30.185482   64930 api_server.go:131] duration metric: took 5.41308ms to wait for apiserver health ...
	I0920 22:25:30.185490   64930 system_pods.go:43] waiting for kube-system pods to appear ...
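The healthz probe a few lines above is a plain HTTPS GET against https://192.168.39.94:8444/healthz that is expected to return 200 with body "ok"; /healthz is normally readable without credentials thanks to the default system:public-info-viewer binding. A throwaway sketch of the same probe, with the assumption that skipping TLS verification is acceptable here (minikube's own check validates against the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // assumption: certificate verification skipped for a quick manual probe only
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.94:8444/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }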
	I0920 22:25:30.254069   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:25:30.255250   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:30.255483   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:30.366383   64930 system_pods.go:59] 9 kube-system pods found
	I0920 22:25:30.366415   64930 system_pods.go:61] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.366421   64930 system_pods.go:61] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.366425   64930 system_pods.go:61] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.366429   64930 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.366433   64930 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.366436   64930 system_pods.go:61] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.366439   64930 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.366444   64930 system_pods.go:61] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.366448   64930 system_pods.go:61] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.366455   64930 system_pods.go:74] duration metric: took 180.959929ms to wait for pod list to return data ...
	I0920 22:25:30.366462   64930 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:25:30.563810   64930 default_sa.go:45] found service account: "default"
	I0920 22:25:30.563838   64930 default_sa.go:55] duration metric: took 197.369932ms for default service account to be created ...
	I0920 22:25:30.563848   64930 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:25:30.767257   64930 system_pods.go:86] 9 kube-system pods found
	I0920 22:25:30.767285   64930 system_pods.go:89] "coredns-7c65d6cfc9-dpwmm" [6bd8b8dc-b176-4d2a-affc-63e02eb9fba3] Running
	I0920 22:25:30.767291   64930 system_pods.go:89] "coredns-7c65d6cfc9-lgvzf" [58852f70-36e8-4972-8a6f-b04d8fc9d9f3] Running
	I0920 22:25:30.767295   64930 system_pods.go:89] "etcd-default-k8s-diff-port-799783" [50316bfb-5f45-4685-a331-896f9ecbf691] Running
	I0920 22:25:30.767309   64930 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-799783" [f5148101-179c-45e3-a7fc-b4b8a95c8b13] Running
	I0920 22:25:30.767313   64930 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-799783" [c4c595bb-40bd-4ad6-b121-43f539fc9d76] Running
	I0920 22:25:30.767318   64930 system_pods.go:89] "kube-proxy-h7j48" [0fe6b074-a959-4f74-a981-e2303126c3ab] Running
	I0920 22:25:30.767323   64930 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-799783" [369b18a4-6c12-403a-889d-4365264a29bf] Running
	I0920 22:25:30.767331   64930 system_pods.go:89] "metrics-server-6867b74b74-pjcvw" [0b1b708c-664e-4d6c-8da2-79941793d8d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:25:30.767337   64930 system_pods.go:89] "storage-provisioner" [6149617c-dd2c-4ac4-ad1b-80f4943d966f] Running
	I0920 22:25:30.767348   64930 system_pods.go:126] duration metric: took 203.493638ms to wait for k8s-apps to be running ...
	I0920 22:25:30.767362   64930 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:25:30.767405   64930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:30.784553   64930 system_svc.go:56] duration metric: took 17.183184ms WaitForService to wait for kubelet
	I0920 22:25:30.784583   64930 kubeadm.go:582] duration metric: took 10.341510649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:25:30.784612   64930 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:25:30.963140   64930 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:25:30.963166   64930 node_conditions.go:123] node cpu capacity is 2
	I0920 22:25:30.963177   64930 node_conditions.go:105] duration metric: took 178.56013ms to run NodePressure ...
	I0920 22:25:30.963187   64930 start.go:241] waiting for startup goroutines ...
	I0920 22:25:30.963194   64930 start.go:246] waiting for cluster config update ...
	I0920 22:25:30.963204   64930 start.go:255] writing updated cluster config ...
	I0920 22:25:30.963504   64930 ssh_runner.go:195] Run: rm -f paused
	I0920 22:25:31.013117   64930 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:25:31.015183   64930 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-799783" cluster and "default" namespace by default
	I0920 22:25:29.779849   64076 pod_ready.go:103] pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace has status "Ready":"False"
	I0920 22:25:31.772018   64076 pod_ready.go:82] duration metric: took 4m0.000779939s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" ...
	E0920 22:25:31.772048   64076 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-x9qjw" in "kube-system" namespace to be "Ready" (will not retry!)
	I0920 22:25:31.772065   64076 pod_ready.go:39] duration metric: took 4m8.986742832s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:25:31.772089   64076 kubeadm.go:597] duration metric: took 4m16.808660952s to restartPrimaryControlPlane
	W0920 22:25:31.772139   64076 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0920 22:25:31.772162   64076 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:25:35.255954   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:35.256195   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:25:45.256746   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:25:45.257006   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
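The [kubelet-check] messages above come from kubeadm repeatedly polling the kubelet's local healthz endpoint until it answers. A minimal sketch of that probe in Go (the URL and port are taken from the log; the retry cadence and timeout are assumptions, and this is not kubeadm's actual code):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls http://localhost:10248/healthz, the endpoint the
// kubelet-check lines report, until it returns 200 or the deadline passes.
func waitKubeletHealthy(timeout time.Duration) error {
	client := &http.Client{Timeout: 2 * time.Second}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(5 * time.Second) // retry until the kubelet comes up
	}
	return fmt.Errorf("kubelet not healthy after %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}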
	I0920 22:25:58.118910   64076 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.346718333s)
	I0920 22:25:58.118998   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:25:58.138485   64076 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 22:25:58.150312   64076 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:25:58.163089   64076 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:25:58.163114   64076 kubeadm.go:157] found existing configuration files:
	
	I0920 22:25:58.163165   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:25:58.173792   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:25:58.173847   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:25:58.185186   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:25:58.195835   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:25:58.195893   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:25:58.206121   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.216016   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:25:58.216076   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:25:58.226242   64076 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:25:58.235841   64076 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:25:58.235907   64076 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:25:58.245684   64076 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:25:58.295781   64076 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 22:25:58.295913   64076 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:25:58.411688   64076 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:25:58.411802   64076 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:25:58.411912   64076 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 22:25:58.421924   64076 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:25:58.424166   64076 out.go:235]   - Generating certificates and keys ...
	I0920 22:25:58.424264   64076 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:25:58.424355   64076 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:25:58.424474   64076 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:25:58.424562   64076 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:25:58.424667   64076 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:25:58.424735   64076 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:25:58.424804   64076 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:25:58.424874   64076 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:25:58.424955   64076 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:25:58.425072   64076 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:25:58.425145   64076 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:25:58.425227   64076 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:25:58.541762   64076 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:25:58.784016   64076 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 22:25:58.942020   64076 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:25:59.024750   64076 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:25:59.348415   64076 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:25:59.348852   64076 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:25:59.351325   64076 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:25:59.353421   64076 out.go:235]   - Booting up control plane ...
	I0920 22:25:59.353551   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:25:59.353681   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:25:59.354049   64076 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:25:59.373482   64076 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:25:59.383527   64076 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:25:59.383614   64076 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:25:59.518812   64076 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 22:25:59.519013   64076 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 22:26:00.520303   64076 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001563044s
	I0920 22:26:00.520408   64076 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 22:26:05.522834   64076 kubeadm.go:310] [api-check] The API server is healthy after 5.002467309s
	I0920 22:26:05.536106   64076 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 22:26:05.552715   64076 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 22:26:05.578398   64076 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 22:26:05.578647   64076 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-032884 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 22:26:05.590816   64076 kubeadm.go:310] [bootstrap-token] Using token: 3j77z9.7tnukhkmyvtf2f59
	I0920 22:26:05.258121   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:05.258367   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:05.592300   64076 out.go:235]   - Configuring RBAC rules ...
	I0920 22:26:05.592451   64076 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 22:26:05.600458   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 22:26:05.608494   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 22:26:05.612079   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 22:26:05.617101   64076 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 22:26:05.627400   64076 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 22:26:05.930323   64076 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 22:26:06.388375   64076 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 22:26:06.934433   64076 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 22:26:06.934459   64076 kubeadm.go:310] 
	I0920 22:26:06.934541   64076 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 22:26:06.934552   64076 kubeadm.go:310] 
	I0920 22:26:06.934644   64076 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 22:26:06.934655   64076 kubeadm.go:310] 
	I0920 22:26:06.934694   64076 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 22:26:06.934824   64076 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 22:26:06.934916   64076 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 22:26:06.934950   64076 kubeadm.go:310] 
	I0920 22:26:06.935050   64076 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 22:26:06.935067   64076 kubeadm.go:310] 
	I0920 22:26:06.935125   64076 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 22:26:06.935136   64076 kubeadm.go:310] 
	I0920 22:26:06.935222   64076 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 22:26:06.935325   64076 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 22:26:06.935429   64076 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 22:26:06.935438   64076 kubeadm.go:310] 
	I0920 22:26:06.935567   64076 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 22:26:06.935719   64076 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 22:26:06.935734   64076 kubeadm.go:310] 
	I0920 22:26:06.935841   64076 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.935978   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e \
	I0920 22:26:06.936005   64076 kubeadm.go:310] 	--control-plane 
	I0920 22:26:06.936013   64076 kubeadm.go:310] 
	I0920 22:26:06.936116   64076 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 22:26:06.936130   64076 kubeadm.go:310] 
	I0920 22:26:06.936241   64076 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3j77z9.7tnukhkmyvtf2f59 \
	I0920 22:26:06.936376   64076 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5973afb509b61c6ef88f31fa7f2a4a6e291ae4a4c9a129e21e0796fe21bc228e 
	I0920 22:26:06.937407   64076 kubeadm.go:310] W0920 22:25:58.249567    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937837   64076 kubeadm.go:310] W0920 22:25:58.250340    2984 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 22:26:06.937982   64076 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
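The --discovery-token-ca-cert-hash value in the join command above is the hex SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A short Go sketch that reproduces that value from a CA certificate (the certificate path is an assumption; minikube keeps its cluster certs under /var/lib/minikube/certs):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes "sha256:" + hex(SHA-256 of the CA's DER-encoded
// SubjectPublicKeyInfo), the format used by --discovery-token-ca-cert-hash.
func caCertHash(caPath string) (string, error) {
	data, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM data in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // assumed path
	fmt.Println(h, err)
}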
	I0920 22:26:06.938009   64076 cni.go:84] Creating CNI manager for ""
	I0920 22:26:06.938021   64076 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 22:26:06.939944   64076 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 22:26:06.941272   64076 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 22:26:06.952238   64076 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 22:26:06.973391   64076 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 22:26:06.973493   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:06.973530   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-032884 minikube.k8s.io/updated_at=2024_09_20T22_26_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=no-preload-032884 minikube.k8s.io/primary=true
	I0920 22:26:07.020571   64076 ops.go:34] apiserver oom_adj: -16
	I0920 22:26:07.194238   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:07.694543   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.194516   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:08.694927   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.195240   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:09.694399   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.194851   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.695301   64076 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 22:26:10.816261   64076 kubeadm.go:1113] duration metric: took 3.842834289s to wait for elevateKubeSystemPrivileges
	I0920 22:26:10.816307   64076 kubeadm.go:394] duration metric: took 4m55.906657261s to StartCluster
	I0920 22:26:10.816334   64076 settings.go:142] acquiring lock: {Name:mk018aef600cad77a705518ce461800515306ed6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.816423   64076 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:26:10.818137   64076 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19672-9614/kubeconfig: {Name:mk39d690a1e3767ee7bc0cf906cdad210ceb970b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 22:26:10.818430   64076 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.19 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0920 22:26:10.818544   64076 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 22:26:10.818635   64076 config.go:182] Loaded profile config "no-preload-032884": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:26:10.818650   64076 addons.go:69] Setting storage-provisioner=true in profile "no-preload-032884"
	I0920 22:26:10.818673   64076 addons.go:234] Setting addon storage-provisioner=true in "no-preload-032884"
	I0920 22:26:10.818683   64076 addons.go:69] Setting default-storageclass=true in profile "no-preload-032884"
	W0920 22:26:10.818695   64076 addons.go:243] addon storage-provisioner should already be in state true
	I0920 22:26:10.818695   64076 addons.go:69] Setting metrics-server=true in profile "no-preload-032884"
	I0920 22:26:10.818709   64076 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-032884"
	I0920 22:26:10.818731   64076 addons.go:234] Setting addon metrics-server=true in "no-preload-032884"
	I0920 22:26:10.818739   64076 host.go:66] Checking if "no-preload-032884" exists ...
	W0920 22:26:10.818744   64076 addons.go:243] addon metrics-server should already be in state true
	I0920 22:26:10.818769   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.819164   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819180   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819179   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.819213   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819222   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.819331   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.820951   64076 out.go:177] * Verifying Kubernetes components...
	I0920 22:26:10.822713   64076 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 22:26:10.836801   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38653
	I0920 22:26:10.837338   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.837962   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.837987   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.838418   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.839070   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.839116   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.840425   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45707
	I0920 22:26:10.840549   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34443
	I0920 22:26:10.840871   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.840930   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.841366   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841392   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841486   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.841510   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.841775   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841840   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.841983   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.842400   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.842439   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.845507   64076 addons.go:234] Setting addon default-storageclass=true in "no-preload-032884"
	W0920 22:26:10.845528   64076 addons.go:243] addon default-storageclass should already be in state true
	I0920 22:26:10.845554   64076 host.go:66] Checking if "no-preload-032884" exists ...
	I0920 22:26:10.845888   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.845930   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.856667   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44155
	I0920 22:26:10.857183   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.857701   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.857717   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.858050   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.858233   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.858557   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0920 22:26:10.858951   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.859570   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.859590   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.859931   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.860098   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.860214   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.861837   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.862616   64076 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 22:26:10.863540   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I0920 22:26:10.863601   64076 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 22:26:10.863918   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.864542   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.864562   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.864714   64076 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:10.864732   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 22:26:10.864750   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.864852   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.865319   64076 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 22:26:10.865348   64076 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 22:26:10.865572   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 22:26:10.865590   64076 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 22:26:10.865611   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.868063   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868471   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.868493   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868500   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.868687   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.868968   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.868990   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.869012   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.869160   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869216   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.869298   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.869356   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.869480   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.869584   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:10.908239   64076 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I0920 22:26:10.910305   64076 main.go:141] libmachine: () Calling .GetVersion
	I0920 22:26:10.910864   64076 main.go:141] libmachine: Using API Version  1
	I0920 22:26:10.910889   64076 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 22:26:10.911286   64076 main.go:141] libmachine: () Calling .GetMachineName
	I0920 22:26:10.911498   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetState
	I0920 22:26:10.913134   64076 main.go:141] libmachine: (no-preload-032884) Calling .DriverName
	I0920 22:26:10.913353   64076 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:10.913369   64076 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 22:26:10.913385   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHHostname
	I0920 22:26:10.916151   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916720   64076 main.go:141] libmachine: (no-preload-032884) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a6:6f:3c", ip: ""} in network mk-no-preload-032884: {Iface:virbr4 ExpiryTime:2024-09-20 23:20:47 +0000 UTC Type:0 Mac:52:54:00:a6:6f:3c Iaid: IPaddr:192.168.72.19 Prefix:24 Hostname:no-preload-032884 Clientid:01:52:54:00:a6:6f:3c}
	I0920 22:26:10.916751   64076 main.go:141] libmachine: (no-preload-032884) DBG | domain no-preload-032884 has defined IP address 192.168.72.19 and MAC address 52:54:00:a6:6f:3c in network mk-no-preload-032884
	I0920 22:26:10.916974   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHPort
	I0920 22:26:10.917122   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHKeyPath
	I0920 22:26:10.917235   64076 main.go:141] libmachine: (no-preload-032884) Calling .GetSSHUsername
	I0920 22:26:10.917322   64076 sshutil.go:53] new ssh client: &{IP:192.168.72.19 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/no-preload-032884/id_rsa Username:docker}
	I0920 22:26:11.062110   64076 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 22:26:11.091616   64076 node_ready.go:35] waiting up to 6m0s for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109422   64076 node_ready.go:49] node "no-preload-032884" has status "Ready":"True"
	I0920 22:26:11.109446   64076 node_ready.go:38] duration metric: took 17.79737ms for node "no-preload-032884" to be "Ready" ...
	I0920 22:26:11.109455   64076 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 22:26:11.117996   64076 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:11.162435   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 22:26:11.251093   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 22:26:11.251120   64076 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 22:26:11.267736   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 22:26:11.300802   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 22:26:11.300828   64076 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 22:26:11.417756   64076 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.417784   64076 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 22:26:11.467144   64076 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 22:26:11.490750   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.490776   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.491084   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.491093   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.491103   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.491127   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.491142   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.492893   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:11.492899   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.492913   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.526730   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:11.526758   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:11.527068   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:11.527113   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:11.527142   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029394   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029419   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.029728   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.029734   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.029753   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.029780   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.029788   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.030049   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.030060   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725440   64076 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.258247375s)
	I0920 22:26:12.725497   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725509   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.725835   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.725896   64076 main.go:141] libmachine: (no-preload-032884) DBG | Closing plugin on server side
	I0920 22:26:12.725909   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.725929   64076 main.go:141] libmachine: Making call to close driver server
	I0920 22:26:12.725937   64076 main.go:141] libmachine: (no-preload-032884) Calling .Close
	I0920 22:26:12.726239   64076 main.go:141] libmachine: Successfully made call to close driver server
	I0920 22:26:12.726256   64076 main.go:141] libmachine: Making call to close connection to plugin binary
	I0920 22:26:12.726268   64076 addons.go:475] Verifying addon metrics-server=true in "no-preload-032884"
	I0920 22:26:12.728264   64076 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0920 22:26:12.729421   64076 addons.go:510] duration metric: took 1.910882335s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0920 22:26:13.131514   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:15.624357   64076 pod_ready.go:103] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:16.624382   64076 pod_ready.go:93] pod "etcd-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:16.624410   64076 pod_ready.go:82] duration metric: took 5.506388589s for pod "etcd-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:16.624419   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:18.631927   64076 pod_ready.go:103] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"False"
	I0920 22:26:19.132302   64076 pod_ready.go:93] pod "kube-apiserver-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.132326   64076 pod_ready.go:82] duration metric: took 2.507900074s for pod "kube-apiserver-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.132336   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137816   64076 pod_ready.go:93] pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.137841   64076 pod_ready.go:82] duration metric: took 5.497911ms for pod "kube-controller-manager-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.137853   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143195   64076 pod_ready.go:93] pod "kube-proxy-fdc7z" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.143225   64076 pod_ready.go:82] duration metric: took 5.363183ms for pod "kube-proxy-fdc7z" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.143236   64076 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148171   64076 pod_ready.go:93] pod "kube-scheduler-no-preload-032884" in "kube-system" namespace has status "Ready":"True"
	I0920 22:26:19.148193   64076 pod_ready.go:82] duration metric: took 4.948914ms for pod "kube-scheduler-no-preload-032884" in "kube-system" namespace to be "Ready" ...
	I0920 22:26:19.148200   64076 pod_ready.go:39] duration metric: took 8.038734304s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
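The pod_ready.go lines above are waiting for each system-critical pod to report the Ready condition. A minimal client-go sketch of that condition check (an illustration, not minikube's pod_ready.go; the kubeconfig path is a placeholder and the pod name in main is taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True, which is
// the "Ready":"True" status the log lines above are polling for.
func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "etcd-no-preload-032884")
	fmt.Println(ready, err)
}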
	I0920 22:26:19.148214   64076 api_server.go:52] waiting for apiserver process to appear ...
	I0920 22:26:19.148259   64076 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 22:26:19.167764   64076 api_server.go:72] duration metric: took 8.349299959s to wait for apiserver process to appear ...
	I0920 22:26:19.167789   64076 api_server.go:88] waiting for apiserver healthz status ...
	I0920 22:26:19.167809   64076 api_server.go:253] Checking apiserver healthz at https://192.168.72.19:8443/healthz ...
	I0920 22:26:19.173104   64076 api_server.go:279] https://192.168.72.19:8443/healthz returned 200:
	ok
	I0920 22:26:19.174150   64076 api_server.go:141] control plane version: v1.31.1
	I0920 22:26:19.174171   64076 api_server.go:131] duration metric: took 6.376066ms to wait for apiserver health ...
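The healthz check above talks to the apiserver's secured port directly, so a probe from outside the cluster either has to trust the cluster CA or skip verification. A minimal sketch of such a probe (the address comes from the log; skipping TLS verification is only to keep the illustration short, a real client would load the CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthz fetches https://<addr>/healthz, the same endpoint the
// api_server.go lines above report a 200 "ok" from.
func apiserverHealthz(addr string) (string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a real check should verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return fmt.Sprintf("%d %s", resp.StatusCode, body), nil
}

func main() {
	out, err := apiserverHealthz("192.168.72.19:8443") // address from the log
	fmt.Println(out, err)
}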
	I0920 22:26:19.174178   64076 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 22:26:19.181050   64076 system_pods.go:59] 9 kube-system pods found
	I0920 22:26:19.181083   64076 system_pods.go:61] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.181091   64076 system_pods.go:61] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.181097   64076 system_pods.go:61] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.181102   64076 system_pods.go:61] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.181106   64076 system_pods.go:61] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.181113   64076 system_pods.go:61] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.181117   64076 system_pods.go:61] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.181126   64076 system_pods.go:61] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.181131   64076 system_pods.go:61] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.181143   64076 system_pods.go:74] duration metric: took 6.958801ms to wait for pod list to return data ...
	I0920 22:26:19.181152   64076 default_sa.go:34] waiting for default service account to be created ...
	I0920 22:26:19.185152   64076 default_sa.go:45] found service account: "default"
	I0920 22:26:19.185178   64076 default_sa.go:55] duration metric: took 4.016584ms for default service account to be created ...
	I0920 22:26:19.185188   64076 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 22:26:19.333687   64076 system_pods.go:86] 9 kube-system pods found
	I0920 22:26:19.333721   64076 system_pods.go:89] "coredns-7c65d6cfc9-fbxtz" [30e7b041-7437-4dec-bea2-4512c1c233d8] Running
	I0920 22:26:19.333729   64076 system_pods.go:89] "coredns-7c65d6cfc9-jsx2x" [6b4bdbac-ba8e-434b-80b7-6ade0c7cc363] Running
	I0920 22:26:19.333734   64076 system_pods.go:89] "etcd-no-preload-032884" [dd023e30-352b-4578-85b8-27fed6b64110] Running
	I0920 22:26:19.333740   64076 system_pods.go:89] "kube-apiserver-no-preload-032884" [5a644708-3886-4534-b936-b4931e804a04] Running
	I0920 22:26:19.333748   64076 system_pods.go:89] "kube-controller-manager-no-preload-032884" [6c871a06-842c-4588-a16f-2ceb8eef6565] Running
	I0920 22:26:19.333753   64076 system_pods.go:89] "kube-proxy-fdc7z" [7a28610e-aa20-4aed-a9a4-1e125f03445b] Running
	I0920 22:26:19.333759   64076 system_pods.go:89] "kube-scheduler-no-preload-032884" [6d878841-99c2-4b0a-95fa-01fbc185e78c] Running
	I0920 22:26:19.333768   64076 system_pods.go:89] "metrics-server-6867b74b74-xvbg9" [33c2a642-1336-4e1c-98cf-a1941710c39f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 22:26:19.333778   64076 system_pods.go:89] "storage-provisioner" [18366542-5571-4ff9-885b-f3aaa3106905] Running
	I0920 22:26:19.333792   64076 system_pods.go:126] duration metric: took 148.596279ms to wait for k8s-apps to be running ...
	I0920 22:26:19.333804   64076 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 22:26:19.333859   64076 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:19.352068   64076 system_svc.go:56] duration metric: took 18.255575ms WaitForService to wait for kubelet
	I0920 22:26:19.352101   64076 kubeadm.go:582] duration metric: took 8.533641688s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 22:26:19.352120   64076 node_conditions.go:102] verifying NodePressure condition ...
	I0920 22:26:19.529468   64076 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0920 22:26:19.529499   64076 node_conditions.go:123] node cpu capacity is 2
	I0920 22:26:19.529510   64076 node_conditions.go:105] duration metric: took 177.384986ms to run NodePressure ...
	I0920 22:26:19.529521   64076 start.go:241] waiting for startup goroutines ...
	I0920 22:26:19.529527   64076 start.go:246] waiting for cluster config update ...
	I0920 22:26:19.529536   64076 start.go:255] writing updated cluster config ...
	I0920 22:26:19.529794   64076 ssh_runner.go:195] Run: rm -f paused
	I0920 22:26:19.578797   64076 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 22:26:19.581150   64076 out.go:177] * Done! kubectl is now configured to use "no-preload-032884" cluster and "default" namespace by default
	I0920 22:26:45.260713   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:26:45.260972   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:26:45.260987   65164 kubeadm.go:310] 
	I0920 22:26:45.261036   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:26:45.261079   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:26:45.261089   65164 kubeadm.go:310] 
	I0920 22:26:45.261139   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:26:45.261186   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:26:45.261335   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:26:45.261345   65164 kubeadm.go:310] 
	I0920 22:26:45.261491   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:26:45.261525   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:26:45.261554   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:26:45.261560   65164 kubeadm.go:310] 
	I0920 22:26:45.261648   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:26:45.261717   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:26:45.261723   65164 kubeadm.go:310] 
	I0920 22:26:45.261812   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:26:45.261917   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:26:45.262021   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:26:45.262113   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:26:45.262126   65164 kubeadm.go:310] 
	I0920 22:26:45.263170   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:26:45.263288   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:26:45.263381   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0920 22:26:45.263497   65164 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0920 22:26:45.263544   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0920 22:26:50.654066   65164 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.390496163s)
	I0920 22:26:50.654143   65164 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 22:26:50.670366   65164 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 22:26:50.682320   65164 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 22:26:50.682340   65164 kubeadm.go:157] found existing configuration files:
	
	I0920 22:26:50.682395   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 22:26:50.692427   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 22:26:50.692501   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 22:26:50.703008   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 22:26:50.713027   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 22:26:50.713090   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 22:26:50.724408   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.733746   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 22:26:50.733805   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 22:26:50.744669   65164 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 22:26:50.753996   65164 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 22:26:50.754061   65164 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 22:26:50.764484   65164 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0920 22:26:50.836541   65164 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0920 22:26:50.836606   65164 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 22:26:50.979844   65164 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 22:26:50.979987   65164 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 22:26:50.980117   65164 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0920 22:26:51.172382   65164 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 22:26:51.174186   65164 out.go:235]   - Generating certificates and keys ...
	I0920 22:26:51.174313   65164 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 22:26:51.174410   65164 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 22:26:51.174550   65164 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0920 22:26:51.174640   65164 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0920 22:26:51.174768   65164 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0920 22:26:51.174855   65164 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0920 22:26:51.175252   65164 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0920 22:26:51.175637   65164 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0920 22:26:51.176082   65164 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0920 22:26:51.176400   65164 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0920 22:26:51.176484   65164 kubeadm.go:310] [certs] Using the existing "sa" key
	I0920 22:26:51.176562   65164 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 22:26:51.391581   65164 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 22:26:51.474902   65164 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 22:26:51.620828   65164 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 22:26:51.838880   65164 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 22:26:51.856375   65164 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 22:26:51.857522   65164 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 22:26:51.857615   65164 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 22:26:52.014694   65164 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 22:26:52.016872   65164 out.go:235]   - Booting up control plane ...
	I0920 22:26:52.017002   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 22:26:52.019915   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 22:26:52.021285   65164 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 22:26:52.022532   65164 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 22:26:52.026230   65164 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0920 22:27:32.029658   65164 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0920 22:27:32.029764   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:32.030051   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:37.030416   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:37.030634   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:27:47.031397   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:27:47.031644   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:07.031668   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:07.031918   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030492   65164 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0920 22:28:47.030797   65164 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0920 22:28:47.030821   65164 kubeadm.go:310] 
	I0920 22:28:47.030874   65164 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0920 22:28:47.030912   65164 kubeadm.go:310] 		timed out waiting for the condition
	I0920 22:28:47.030919   65164 kubeadm.go:310] 
	I0920 22:28:47.030982   65164 kubeadm.go:310] 	This error is likely caused by:
	I0920 22:28:47.031046   65164 kubeadm.go:310] 		- The kubelet is not running
	I0920 22:28:47.031204   65164 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0920 22:28:47.031225   65164 kubeadm.go:310] 
	I0920 22:28:47.031371   65164 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0920 22:28:47.031425   65164 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0920 22:28:47.031462   65164 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0920 22:28:47.031466   65164 kubeadm.go:310] 
	I0920 22:28:47.031605   65164 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0920 22:28:47.031727   65164 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0920 22:28:47.031739   65164 kubeadm.go:310] 
	I0920 22:28:47.031880   65164 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0920 22:28:47.032003   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0920 22:28:47.032102   65164 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0920 22:28:47.032202   65164 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0920 22:28:47.032212   65164 kubeadm.go:310] 
	I0920 22:28:47.033045   65164 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 22:28:47.033157   65164 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0920 22:28:47.033247   65164 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0920 22:28:47.033315   65164 kubeadm.go:394] duration metric: took 8m3.299765846s to StartCluster
	I0920 22:28:47.033360   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0920 22:28:47.033452   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 22:28:47.081201   65164 cri.go:89] found id: ""
	I0920 22:28:47.081225   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.081233   65164 logs.go:278] No container was found matching "kube-apiserver"
	I0920 22:28:47.081239   65164 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0920 22:28:47.081289   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 22:28:47.119048   65164 cri.go:89] found id: ""
	I0920 22:28:47.119081   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.119089   65164 logs.go:278] No container was found matching "etcd"
	I0920 22:28:47.119094   65164 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0920 22:28:47.119140   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 22:28:47.164690   65164 cri.go:89] found id: ""
	I0920 22:28:47.164719   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.164730   65164 logs.go:278] No container was found matching "coredns"
	I0920 22:28:47.164738   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0920 22:28:47.164806   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 22:28:47.201003   65164 cri.go:89] found id: ""
	I0920 22:28:47.201035   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.201046   65164 logs.go:278] No container was found matching "kube-scheduler"
	I0920 22:28:47.201053   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0920 22:28:47.201114   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 22:28:47.238478   65164 cri.go:89] found id: ""
	I0920 22:28:47.238510   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.238522   65164 logs.go:278] No container was found matching "kube-proxy"
	I0920 22:28:47.238530   65164 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 22:28:47.238579   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 22:28:47.274372   65164 cri.go:89] found id: ""
	I0920 22:28:47.274406   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.274415   65164 logs.go:278] No container was found matching "kube-controller-manager"
	I0920 22:28:47.274422   65164 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0920 22:28:47.274487   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 22:28:47.310155   65164 cri.go:89] found id: ""
	I0920 22:28:47.310181   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.310191   65164 logs.go:278] No container was found matching "kindnet"
	I0920 22:28:47.310197   65164 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 22:28:47.310243   65164 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 22:28:47.350845   65164 cri.go:89] found id: ""
	I0920 22:28:47.350872   65164 logs.go:276] 0 containers: []
	W0920 22:28:47.350882   65164 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0920 22:28:47.350894   65164 logs.go:123] Gathering logs for kubelet ...
	I0920 22:28:47.350908   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0920 22:28:47.403879   65164 logs.go:123] Gathering logs for dmesg ...
	I0920 22:28:47.403915   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 22:28:47.418989   65164 logs.go:123] Gathering logs for describe nodes ...
	I0920 22:28:47.419021   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0920 22:28:47.511632   65164 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0920 22:28:47.511697   65164 logs.go:123] Gathering logs for CRI-O ...
	I0920 22:28:47.511718   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0920 22:28:47.635613   65164 logs.go:123] Gathering logs for container status ...
	I0920 22:28:47.635653   65164 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0920 22:28:47.677326   65164 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0920 22:28:47.677383   65164 out.go:270] * 
	W0920 22:28:47.677436   65164 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.677449   65164 out.go:270] * 
	W0920 22:28:47.678365   65164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 22:28:47.681590   65164 out.go:201] 
	W0920 22:28:47.682658   65164 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0920 22:28:47.682696   65164 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0920 22:28:47.682719   65164 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0920 22:28:47.684042   65164 out.go:201] 
	
	
	==> CRI-O <==
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.281372352Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033281345119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=709fe575-2eec-44b0-a457-9f53e5f00185 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.282030327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=064aeeb7-dce9-43be-b3eb-62b5293d822e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.282089669Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=064aeeb7-dce9-43be-b3eb-62b5293d822e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.282118789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=064aeeb7-dce9-43be-b3eb-62b5293d822e name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.315462472Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=27abb7d9-dbad-4c17-b4f9-73f49e4e339c name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.315538100Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=27abb7d9-dbad-4c17-b4f9-73f49e4e339c name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.316530223Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1cf73ffb-3eb4-4c3b-9a26-d577eed0ffdf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.316936611Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033316894790,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1cf73ffb-3eb4-4c3b-9a26-d577eed0ffdf name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.317465790Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bdc19d1b-6355-4411-b2f2-075f8cabf244 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.317517810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bdc19d1b-6355-4411-b2f2-075f8cabf244 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.317549768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=bdc19d1b-6355-4411-b2f2-075f8cabf244 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.351506403Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=481530c4-f5c8-41a2-af57-a4690e9a33df name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.351583280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=481530c4-f5c8-41a2-af57-a4690e9a33df name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.352636005Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb6258b0-e5c0-481e-9c8e-d4c0502474fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.353039460Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033353016503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb6258b0-e5c0-481e-9c8e-d4c0502474fc name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.353725178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbc29f52-3514-450b-8f27-513d02a78e5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.353800155Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbc29f52-3514-450b-8f27-513d02a78e5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.353881592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbc29f52-3514-450b-8f27-513d02a78e5c name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.385972709Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93631066-5276-48f8-8673-6ac72d104202 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.386059673Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93631066-5276-48f8-8673-6ac72d104202 name=/runtime.v1.RuntimeService/Version
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.387152491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24df465a-f6ce-40af-b56b-8c9488a83a40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.387597312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1726872033387570216,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24df465a-f6ce-40af-b56b-8c9488a83a40 name=/runtime.v1.ImageService/ImageFsInfo
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.388167873Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6f577a4-f88f-47a1-ba59-29924fa7d440 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.388246789Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6f577a4-f88f-47a1-ba59-29924fa7d440 name=/runtime.v1.RuntimeService/ListContainers
	Sep 20 22:40:33 old-k8s-version-593412 crio[635]: time="2024-09-20 22:40:33.388287647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c6f577a4-f88f-47a1-ba59-29924fa7d440 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Sep20 22:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053738] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040959] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.115225] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.678902] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.608738] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.088274] systemd-fstab-generator[563]: Ignoring "noauto" option for root device
	[  +0.062764] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064033] systemd-fstab-generator[575]: Ignoring "noauto" option for root device
	[  +0.206014] systemd-fstab-generator[589]: Ignoring "noauto" option for root device
	[  +0.138069] systemd-fstab-generator[601]: Ignoring "noauto" option for root device
	[  +0.284835] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +6.732414] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.070990] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.334399] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[ +12.408297] kauditd_printk_skb: 46 callbacks suppressed
	[Sep20 22:24] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Sep20 22:26] systemd-fstab-generator[5359]: Ignoring "noauto" option for root device
	[  +0.072028] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:40:33 up 20 min,  0 users,  load average: 0.00, 0.02, 0.03
	Linux old-k8s-version-593412 5.10.207 #1 SMP Fri Sep 20 03:13:51 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d8540, 0xc000d82630, 0x1, 0x0, 0x0)
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc000877180)
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: goroutine 158 [select]:
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0000519f0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000303e60, 0x0, 0x0)
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc000877180)
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Sep 20 22:40:29 old-k8s-version-593412 kubelet[6870]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Sep 20 22:40:29 old-k8s-version-593412 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Sep 20 22:40:29 old-k8s-version-593412 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 20 22:40:29 old-k8s-version-593412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 142.
	Sep 20 22:40:29 old-k8s-version-593412 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 20 22:40:29 old-k8s-version-593412 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 20 22:40:30 old-k8s-version-593412 kubelet[6879]: I0920 22:40:30.080291    6879 server.go:416] Version: v1.20.0
	Sep 20 22:40:30 old-k8s-version-593412 kubelet[6879]: I0920 22:40:30.080834    6879 server.go:837] Client rotation is on, will bootstrap in background
	Sep 20 22:40:30 old-k8s-version-593412 kubelet[6879]: I0920 22:40:30.083281    6879 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Sep 20 22:40:30 old-k8s-version-593412 kubelet[6879]: W0920 22:40:30.084701    6879 manager.go:159] Cannot detect current cgroup on cgroup v2
	Sep 20 22:40:30 old-k8s-version-593412 kubelet[6879]: I0920 22:40:30.084848    6879 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 2 (228.708108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-593412" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (160.15s)
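For reference, a minimal shell sketch of the checks this failure points at, assuming access to the affected node (for example via 'minikube ssh -p old-k8s-version-593412'). Every command is taken from the kubeadm/minikube output above except the final 'minikube start' invocation, which simply applies minikube's own printed suggestion and is not verified against this run:

	# Check whether the kubelet service is running and why it exited (per the kubeadm advice above)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# List Kubernetes containers via crictl against the cri-o socket used in this run
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Inspect the logs of a failing container once its ID is known (CONTAINERID is a placeholder)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# minikube's printed suggestion for this failure mode (related issue #4172); untested here
	minikube start -p old-k8s-version-593412 --extra-config=kubelet.cgroup-driver=systemd

The kubelet excerpt above shows the service crash-looping (restart counter at 142) and warning "Cannot detect current cgroup on cgroup v2" under Kubernetes v1.20.0, which is consistent with the cgroup-driver suggestion minikube prints.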

                                                
                                    

Test pass (250/317)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.32
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 6.32
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.59
22 TestOffline 58.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 129.07
31 TestAddons/serial/GCPAuth/Namespaces 2.81
35 TestAddons/parallel/InspektorGadget 10.87
38 TestAddons/parallel/CSI 53.94
39 TestAddons/parallel/Headlamp 19.8
40 TestAddons/parallel/CloudSpanner 6.6
41 TestAddons/parallel/LocalPath 45.23
42 TestAddons/parallel/NvidiaDevicePlugin 6.14
43 TestAddons/parallel/Yakd 11.64
44 TestAddons/StoppedEnableDisable 7.57
45 TestCertOptions 52.24
46 TestCertExpiration 256.92
48 TestForceSystemdFlag 57.2
49 TestForceSystemdEnv 102.74
51 TestKVMDriverInstallOrUpdate 1.24
55 TestErrorSpam/setup 44.38
56 TestErrorSpam/start 0.34
57 TestErrorSpam/status 0.72
58 TestErrorSpam/pause 1.6
59 TestErrorSpam/unpause 1.73
60 TestErrorSpam/stop 5.57
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 82.49
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 367.41
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
72 TestFunctional/serial/CacheCmd/cache/add_local 1.12
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.71
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.11
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 57.02
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.59
83 TestFunctional/serial/LogsFileCmd 1.56
84 TestFunctional/serial/InvalidService 5.01
86 TestFunctional/parallel/ConfigCmd 0.35
87 TestFunctional/parallel/DashboardCmd 30.21
88 TestFunctional/parallel/DryRun 0.27
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 1.37
94 TestFunctional/parallel/ServiceCmdConnect 6.72
95 TestFunctional/parallel/AddonsCmd 0.21
96 TestFunctional/parallel/PersistentVolumeClaim 41.37
98 TestFunctional/parallel/SSHCmd 0.39
99 TestFunctional/parallel/CpCmd 1.45
100 TestFunctional/parallel/MySQL 27.16
101 TestFunctional/parallel/FileSync 0.22
102 TestFunctional/parallel/CertSync 1.59
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.42
110 TestFunctional/parallel/License 0.2
111 TestFunctional/parallel/ServiceCmd/DeployApp 12.22
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.7
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.38
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.14
119 TestFunctional/parallel/ImageCommands/Setup 0.41
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.25
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
125 TestFunctional/parallel/ProfileCmd/profile_list 0.6
126 TestFunctional/parallel/MountCmd/any-port 7.78
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.01
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.53
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.96
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.56
143 TestFunctional/parallel/MountCmd/specific-port 1.68
144 TestFunctional/parallel/ServiceCmd/List 0.3
145 TestFunctional/parallel/MountCmd/VerifyCleanup 0.77
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.32
148 TestFunctional/parallel/ServiceCmd/Format 0.5
149 TestFunctional/parallel/ServiceCmd/URL 0.38
150 TestFunctional/delete_echo-server_images 0.04
151 TestFunctional/delete_my-image_image 0.02
152 TestFunctional/delete_minikube_cached_images 0.02
156 TestMultiControlPlane/serial/StartCluster 200.54
157 TestMultiControlPlane/serial/DeployApp 5.98
158 TestMultiControlPlane/serial/PingHostFromPods 1.22
159 TestMultiControlPlane/serial/AddWorkerNode 56.41
160 TestMultiControlPlane/serial/NodeLabels 0.07
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.88
162 TestMultiControlPlane/serial/CopyFile 12.7
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.14
168 TestMultiControlPlane/serial/DeleteSecondaryNode 16.81
169 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
171 TestMultiControlPlane/serial/RestartCluster 351.1
172 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
173 TestMultiControlPlane/serial/AddSecondaryNode 73.77
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
178 TestJSONOutput/start/Command 84.04
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.73
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.63
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 7.33
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.19
206 TestMainNoArgs 0.04
207 TestMinikubeProfile 91.2
210 TestMountStart/serial/StartWithMountFirst 24.96
211 TestMountStart/serial/VerifyMountFirst 0.38
212 TestMountStart/serial/StartWithMountSecond 29.1
213 TestMountStart/serial/VerifyMountSecond 0.36
214 TestMountStart/serial/DeleteFirst 0.67
215 TestMountStart/serial/VerifyMountPostDelete 0.36
216 TestMountStart/serial/Stop 1.27
217 TestMountStart/serial/RestartStopped 24.04
218 TestMountStart/serial/VerifyMountPostStop 0.37
221 TestMultiNode/serial/FreshStart2Nodes 112.97
222 TestMultiNode/serial/DeployApp2Nodes 5.12
223 TestMultiNode/serial/PingHostFrom2Pods 0.79
224 TestMultiNode/serial/AddNode 50.02
225 TestMultiNode/serial/MultiNodeLabels 0.06
226 TestMultiNode/serial/ProfileList 0.58
227 TestMultiNode/serial/CopyFile 7.15
228 TestMultiNode/serial/StopNode 2.29
229 TestMultiNode/serial/StartAfterStop 39.15
231 TestMultiNode/serial/DeleteNode 2.18
233 TestMultiNode/serial/RestartMultiNode 178.87
234 TestMultiNode/serial/ValidateNameConflict 44.85
241 TestScheduledStopUnix 113.61
245 TestRunningBinaryUpgrade 174.97
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
251 TestNoKubernetes/serial/StartWithK8s 126.25
252 TestStoppedBinaryUpgrade/Setup 1.11
253 TestStoppedBinaryUpgrade/Upgrade 147.47
254 TestNoKubernetes/serial/StartWithStopK8s 39.71
255 TestNoKubernetes/serial/Start 46.18
256 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
265 TestPause/serial/Start 88.56
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
267 TestNoKubernetes/serial/ProfileList 1.69
268 TestNoKubernetes/serial/Stop 1.54
269 TestNoKubernetes/serial/StartNoArgs 43.99
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
271 TestPause/serial/SecondStartNoReconfiguration 48.42
272 TestPause/serial/Pause 0.89
273 TestPause/serial/VerifyStatus 0.29
274 TestPause/serial/Unpause 0.72
275 TestPause/serial/PauseAgain 1.08
276 TestPause/serial/DeletePaused 1.28
277 TestPause/serial/VerifyDeletedResources 0.65
285 TestNetworkPlugins/group/false 3.31
292 TestStartStop/group/no-preload/serial/FirstStart 78.54
294 TestStartStop/group/embed-certs/serial/FirstStart 103.45
295 TestStartStop/group/no-preload/serial/DeployApp 10.31
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1
298 TestStartStop/group/embed-certs/serial/DeployApp 10.29
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.97
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
303 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.32
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.95
307 TestStartStop/group/no-preload/serial/SecondStart 681.6
311 TestStartStop/group/embed-certs/serial/SecondStart 530.31
313 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 526.02
314 TestStartStop/group/old-k8s-version/serial/Stop 6.3
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/newest-cni/serial/FirstStart 49.48
327 TestNetworkPlugins/group/auto/Start 83.38
328 TestNetworkPlugins/group/kindnet/Start 65.11
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
331 TestStartStop/group/newest-cni/serial/Stop 10.61
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
333 TestStartStop/group/newest-cni/serial/SecondStart 50.88
334 TestNetworkPlugins/group/auto/KubeletFlags 0.22
335 TestNetworkPlugins/group/auto/NetCatPod 12.29
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
339 TestStartStop/group/newest-cni/serial/Pause 2.49
340 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
341 TestNetworkPlugins/group/calico/Start 75.4
342 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
343 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
344 TestNetworkPlugins/group/auto/DNS 0.23
345 TestNetworkPlugins/group/auto/Localhost 0.17
346 TestNetworkPlugins/group/auto/HairPin 0.15
347 TestNetworkPlugins/group/kindnet/DNS 0.17
348 TestNetworkPlugins/group/kindnet/Localhost 0.13
349 TestNetworkPlugins/group/kindnet/HairPin 0.15
350 TestNetworkPlugins/group/custom-flannel/Start 81.6
351 TestNetworkPlugins/group/enable-default-cni/Start 118.74
352 TestNetworkPlugins/group/flannel/Start 116.18
353 TestNetworkPlugins/group/calico/ControllerPod 6.01
354 TestNetworkPlugins/group/calico/KubeletFlags 0.23
355 TestNetworkPlugins/group/calico/NetCatPod 12.23
356 TestNetworkPlugins/group/calico/DNS 0.21
357 TestNetworkPlugins/group/calico/Localhost 0.14
358 TestNetworkPlugins/group/calico/HairPin 0.15
359 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
360 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
361 TestNetworkPlugins/group/bridge/Start 55.36
362 TestNetworkPlugins/group/custom-flannel/DNS 0.22
363 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
364 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
365 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
366 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
367 TestNetworkPlugins/group/flannel/ControllerPod 6.01
368 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
369 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
370 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
371 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
372 TestNetworkPlugins/group/flannel/NetCatPod 11.24
373 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
374 TestNetworkPlugins/group/bridge/NetCatPod 11.26
375 TestNetworkPlugins/group/flannel/DNS 0.19
376 TestNetworkPlugins/group/flannel/Localhost 0.17
377 TestNetworkPlugins/group/flannel/HairPin 0.16
378 TestNetworkPlugins/group/bridge/DNS 21.44
379 TestNetworkPlugins/group/bridge/Localhost 0.12
380 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (9.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-437637 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-437637 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.318892777s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.32s)
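For reference, the download-only flow exercised above can be repeated by hand with the same flags the test passes. A minimal sketch, assuming a minikube binary on PATH rather than the test's out/minikube-linux-amd64 build, and an arbitrary profile name:

    # fetch the v1.20.0 ISO and preload without creating a VM, emitting JSON progress events
    minikube start -o=json --download-only -p download-only-demo \
      --force --alsologtostderr \
      --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
    # remove the throwaway profile afterwards
    minikube delete -p download-only-demo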

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 20:47:49.221758   16785 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0920 20:47:49.221856   16785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
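The preload-exists check only verifies that the tarball landed in the cache directory; an equivalent manual check, assuming the default ~/.minikube location rather than the MINIKUBE_HOME used by this run, is:

    ls ~/.minikube/cache/preloaded-tarball/ | grep v1.20.0-cri-o-overlay-amd64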

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-437637
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-437637: exit status 85 (55.103834ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |          |
	|         | -p download-only-437637        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:39
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:39.939434   16797 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:39.939567   16797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:39.939576   16797 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:39.939580   16797 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:39.939782   16797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	W0920 20:47:39.939929   16797 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19672-9614/.minikube/config/config.json: open /home/jenkins/minikube-integration/19672-9614/.minikube/config/config.json: no such file or directory
	I0920 20:47:39.940558   16797 out.go:352] Setting JSON to true
	I0920 20:47:39.941504   16797 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1808,"bootTime":1726863452,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:39.941605   16797 start.go:139] virtualization: kvm guest
	I0920 20:47:39.944246   16797 out.go:97] [download-only-437637] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0920 20:47:39.944381   16797 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 20:47:39.944420   16797 notify.go:220] Checking for updates...
	I0920 20:47:39.945672   16797 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:39.947165   16797 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:39.948639   16797 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:47:39.950105   16797 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:39.951435   16797 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 20:47:39.954073   16797 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 20:47:39.954328   16797 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:40.053943   16797 out.go:97] Using the kvm2 driver based on user configuration
	I0920 20:47:40.053966   16797 start.go:297] selected driver: kvm2
	I0920 20:47:40.053972   16797 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:40.054318   16797 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:40.054433   16797 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:40.069886   16797 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:40.069960   16797 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:40.070743   16797 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 20:47:40.070942   16797 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 20:47:40.071000   16797 cni.go:84] Creating CNI manager for ""
	I0920 20:47:40.071061   16797 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:47:40.071073   16797 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:40.071140   16797 start.go:340] cluster config:
	{Name:download-only-437637 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-437637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:40.071379   16797 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:40.073537   16797 out.go:97] Downloading VM boot image ...
	I0920 20:47:40.073589   16797 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/iso/amd64/minikube-v1.34.0-1726784654-19672-amd64.iso
	I0920 20:47:43.981089   16797 out.go:97] Starting "download-only-437637" primary control-plane node in "download-only-437637" cluster
	I0920 20:47:43.981119   16797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 20:47:44.003159   16797 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:44.003183   16797 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:44.003341   16797 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0920 20:47:44.005093   16797 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 20:47:44.005118   16797 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0920 20:47:44.028523   16797 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-437637 host does not exist
	  To start a cluster, run: "minikube start -p download-only-437637"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-437637
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (6.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-454496 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-454496 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.318779339s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.32s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 20:47:55.848603   16785 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0920 20:47:55.848645   16785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-454496
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-454496: exit status 85 (56.26253ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-437637        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| delete  | -p download-only-437637        | download-only-437637 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC | 20 Sep 24 20:47 UTC |
	| start   | -o=json --download-only        | download-only-454496 | jenkins | v1.34.0 | 20 Sep 24 20:47 UTC |                     |
	|         | -p download-only-454496        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:47:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:47:49.565577   17015 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:47:49.565682   17015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:49.565693   17015 out.go:358] Setting ErrFile to fd 2...
	I0920 20:47:49.565698   17015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:47:49.565874   17015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 20:47:49.566455   17015 out.go:352] Setting JSON to true
	I0920 20:47:49.567311   17015 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1818,"bootTime":1726863452,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 20:47:49.567407   17015 start.go:139] virtualization: kvm guest
	I0920 20:47:49.569821   17015 out.go:97] [download-only-454496] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 20:47:49.569980   17015 notify.go:220] Checking for updates...
	I0920 20:47:49.571502   17015 out.go:169] MINIKUBE_LOCATION=19672
	I0920 20:47:49.572916   17015 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:47:49.574075   17015 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 20:47:49.575480   17015 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 20:47:49.576850   17015 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 20:47:49.579229   17015 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 20:47:49.579455   17015 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:47:49.611097   17015 out.go:97] Using the kvm2 driver based on user configuration
	I0920 20:47:49.611132   17015 start.go:297] selected driver: kvm2
	I0920 20:47:49.611140   17015 start.go:901] validating driver "kvm2" against <nil>
	I0920 20:47:49.611479   17015 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:49.611570   17015 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19672-9614/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0920 20:47:49.626830   17015 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I0920 20:47:49.626883   17015 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:47:49.627415   17015 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0920 20:47:49.627571   17015 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 20:47:49.627599   17015 cni.go:84] Creating CNI manager for ""
	I0920 20:47:49.627659   17015 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0920 20:47:49.627668   17015 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 20:47:49.627734   17015 start.go:340] cluster config:
	{Name:download-only-454496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-454496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:47:49.627839   17015 iso.go:125] acquiring lock: {Name:mk1ccde703058df9ac74e8f7388117bc62d90ea2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:47:49.629737   17015 out.go:97] Starting "download-only-454496" primary control-plane node in "download-only-454496" cluster
	I0920 20:47:49.629759   17015 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:47:49.651397   17015 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:49.651430   17015 cache.go:56] Caching tarball of preloaded images
	I0920 20:47:49.651597   17015 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0920 20:47:49.653549   17015 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 20:47:49.653572   17015 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 20:47:49.679002   17015 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0920 20:47:54.549192   17015 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0920 20:47:54.549286   17015 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19672-9614/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-454496 host does not exist
	  To start a cluster, run: "minikube start -p download-only-454496"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-454496
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0920 20:47:56.407026   16785 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-545297 --alsologtostderr --binary-mirror http://127.0.0.1:41175 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-545297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-545297
--- PASS: TestBinaryMirror (0.59s)
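--binary-mirror only changes where minikube fetches the kubectl, kubeadm, and kubelet binaries; the test points it at a local address (127.0.0.1:41175 above). A hand-run sketch, assuming a mirror that actually serves the release binaries with the same paths dl.k8s.io uses (the port and profile name are placeholders):

    minikube start --download-only -p binary-mirror-demo --alsologtostderr \
      --binary-mirror http://127.0.0.1:41175 \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p binary-mirror-demo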

                                                
                                    
TestOffline (58.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-598655 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-598655 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (57.098742126s)
helpers_test.go:175: Cleaning up "offline-crio-598655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-598655
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-598655: (1.163900746s)
--- PASS: TestOffline (58.26s)
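The offline test boils down to a plain start with a small memory footprint; the same invocation by hand, with a placeholder profile name, would be:

    minikube start -p offline-demo --alsologtostderr -v=1 \
      --memory=2048 --wait=true --driver=kvm2 --container-runtime=crio
    minikube delete -p offline-demo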

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-628754
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-628754: exit status 85 (48.6862ms)

                                                
                                                
-- stdout --
	* Profile "addons-628754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-628754"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-628754
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-628754: exit status 85 (47.761ms)

                                                
                                                
-- stdout --
	* Profile "addons-628754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-628754"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (129.07s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-628754 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-628754 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m9.07389965s)
--- PASS: TestAddons/Setup (129.07s)
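All of those addons are switched on at start time through repeated --addons flags; on a profile that is already running, the same toggling is done per addon. A small sketch using the profile from this run and one of the addons it enables:

    minikube addons list -p addons-628754
    minikube addons enable metrics-server -p addons-628754
    minikube addons disable metrics-server -p addons-628754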

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (2.81s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-628754 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-628754 get secret gcp-auth -n new-namespace
addons_test.go:608: (dbg) Non-zero exit: kubectl --context addons-628754 get secret gcp-auth -n new-namespace: exit status 1 (102.941157ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:600: (dbg) Run:  kubectl --context addons-628754 logs -l app=gcp-auth -n gcp-auth
I0920 20:50:06.354473   16785 retry.go:31] will retry after 2.519587879s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/09/20 20:50:05 GCP Auth Webhook started!
	2024/09/20 20:50:06 Ready to marshal response ...
	2024/09/20 20:50:06 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:608: (dbg) Run:  kubectl --context addons-628754 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.81s)
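The retry above is simply the harness waiting for the gcp-auth addon to populate its secret in the freshly created namespace; the underlying check is the pair of kubectl calls already shown, which can be rerun as-is:

    kubectl --context addons-628754 create ns new-namespace
    kubectl --context addons-628754 get secret gcp-auth -n new-namespace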

                                                
                                    
TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jrph6" [ef0dcf62-bd0a-4732-ba4d-9c2c66259225] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00763185s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-628754
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-628754: (5.865905545s)
--- PASS: TestAddons/parallel/InspektorGadget (10.87s)

                                                
                                    
TestAddons/parallel/CSI (53.94s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0920 20:58:31.617042   16785 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 20:58:31.621816   16785 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 20:58:31.621839   16785 kapi.go:107] duration metric: took 4.826948ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.833742ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-628754 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-628754 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4f0506be-6d86-4213-9171-c78b7c3c0a04] Pending
helpers_test.go:344: "task-pv-pod" [4f0506be-6d86-4213-9171-c78b7c3c0a04] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4f0506be-6d86-4213-9171-c78b7c3c0a04] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004582582s
addons_test.go:528: (dbg) Run:  kubectl --context addons-628754 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-628754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-628754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-628754 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-628754 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-628754 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-628754 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c2907e84-fb91-453f-a85e-eb4cdcd79871] Pending
helpers_test.go:344: "task-pv-pod-restore" [c2907e84-fb91-453f-a85e-eb4cdcd79871] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c2907e84-fb91-453f-a85e-eb4cdcd79871] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003794917s
addons_test.go:570: (dbg) Run:  kubectl --context addons-628754 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-628754 delete pod task-pv-pod-restore: (1.576570166s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-628754 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-628754 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.036127378s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable volumesnapshots --alsologtostderr -v=1: (1.312145253s)
--- PASS: TestAddons/parallel/CSI (53.94s)
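The long run of identical helper invocations above is the harness polling the claim's phase; outside the harness the same thing can be watched directly instead of polled (context, claim name, and namespace copied from the test):

    # one-shot phase query, exactly as the helper runs it
    kubectl --context addons-628754 get pvc hpvc -n default -o jsonpath='{.status.phase}'
    # or stream updates until the claim settles
    kubectl --context addons-628754 get pvc hpvc -n default -w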

                                                
                                    
TestAddons/parallel/Headlamp (19.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-628754 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-c96nw" [5b92a042-29e9-42ae-974d-8358d97cfaad] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-c96nw" [5b92a042-29e9-42ae-974d-8358d97cfaad] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-c96nw" [5b92a042-29e9-42ae-974d-8358d97cfaad] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004758526s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable headlamp --alsologtostderr -v=1: (5.860389839s)
--- PASS: TestAddons/parallel/Headlamp (19.80s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-vnqvr" [40403144-ba4f-401d-a404-52244ac73d8a] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003867746s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-628754
--- PASS: TestAddons/parallel/CloudSpanner (6.60s)

                                                
                                    
TestAddons/parallel/LocalPath (45.23s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-628754 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-628754 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [fcb0e17b-2ae8-47b7-973b-d7151554930f] Pending
helpers_test.go:344: "test-local-path" [fcb0e17b-2ae8-47b7-973b-d7151554930f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [fcb0e17b-2ae8-47b7-973b-d7151554930f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [fcb0e17b-2ae8-47b7-973b-d7151554930f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.008409999s
addons_test.go:938: (dbg) Run:  kubectl --context addons-628754 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 ssh "cat /opt/local-path-provisioner/pvc-3e8f244a-7a2e-48ce-baa7-c2ca4ae064da_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-628754 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-628754 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (36.280838819s)
--- PASS: TestAddons/parallel/LocalPath (45.23s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.14s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-whxlq" [08b37854-27d3-47e3-b7b8-1eece2ad9368] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.041594217s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-628754
addons_test.go:1002: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-628754: (1.094844339s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.14s)

                                                
                                    
TestAddons/parallel/Yakd (11.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ltfz9" [cba2c19a-17fb-4c0d-a639-5c2201de584d] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.041507007s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-628754 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-628754 addons disable yakd --alsologtostderr -v=1: (6.595457639s)
--- PASS: TestAddons/parallel/Yakd (11.64s)

                                                
                                    
TestAddons/StoppedEnableDisable (7.57s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-628754
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-628754: (7.297406365s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-628754
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-628754
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-628754
--- PASS: TestAddons/StoppedEnableDisable (7.57s)

                                                
                                    
TestCertOptions (52.24s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-335786 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0920 22:10:09.132509   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-335786 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (50.752377343s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-335786 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-335786 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-335786 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-335786" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-335786
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-335786: (1.00356827s)
--- PASS: TestCertOptions (52.24s)

                                                
                                    
TestCertExpiration (256.92s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673228 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0920 22:08:54.878684   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673228 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (46.904232079s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-673228 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-673228 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (29.014095234s)
helpers_test.go:175: Cleaning up "cert-expiration-673228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-673228
--- PASS: TestCertExpiration (256.92s)

                                                
                                    
TestForceSystemdFlag (57.2s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-564958 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0920 22:09:11.809448   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-564958 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (56.229734718s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-564958 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-564958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-564958
--- PASS: TestForceSystemdFlag (57.20s)

                                                
                                    
TestForceSystemdEnv (102.74s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-638205 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-638205 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m41.98435722s)
helpers_test.go:175: Cleaning up "force-systemd-env-638205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-638205
--- PASS: TestForceSystemdEnv (102.74s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.24s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0920 22:10:30.285258   16785 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 22:10:30.285382   16785 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 22:10:30.315991   16785 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 22:10:30.316397   16785 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 22:10:30.316486   16785 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3349954476/001/docker-machine-driver-kvm2
I0920 22:10:30.513161   16785 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3349954476/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000619ce0 gz:0xc000619ce8 tar:0xc000619c90 tar.bz2:0xc000619ca0 tar.gz:0xc000619cb0 tar.xz:0xc000619cc0 tar.zst:0xc000619cd0 tbz2:0xc000619ca0 tgz:0xc000619cb0 txz:0xc000619cc0 tzst:0xc000619cd0 xz:0xc000619cf0 zip:0xc000619d70 zst:0xc000619cf8] Getters:map[file:0xc0007fc4d0 http:0xc0008923c0 https:0xc000892410] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 22:10:30.513220   16785 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3349954476/001/docker-machine-driver-kvm2
I0920 22:10:31.072839   16785 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 22:10:31.072969   16785 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 22:10:31.100603   16785 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 22:10:31.100636   16785 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 22:10:31.100711   16785 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 22:10:31.100741   16785 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3349954476/002/docker-machine-driver-kvm2
I0920 22:10:31.126555   16785 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3349954476/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000619ce0 gz:0xc000619ce8 tar:0xc000619c90 tar.bz2:0xc000619ca0 tar.gz:0xc000619cb0 tar.xz:0xc000619cc0 tar.zst:0xc000619cd0 tbz2:0xc000619ca0 tgz:0xc000619cb0 txz:0xc000619cc0 tzst:0xc000619cd0 xz:0xc000619cf0 zip:0xc000619d70 zst:0xc000619cf8] Getters:map[file:0xc0018b3860 http:0xc00028c460 https:0xc00028c4b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 22:10:31.126607   16785 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3349954476/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.24s)
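
Note: the driver.go lines above show the installer's fallback order: it first tries the arch-specific release asset (docker-machine-driver-kvm2-amd64) and, when that download's checksum file returns 404, retries the common unsuffixed asset. The sketch below only illustrates that fallback pattern; the function names, the hard-coded v1.3.0 version, and the omission of checksum verification are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetch downloads url to dst, treating any non-200 status as an error.
func fetch(url, dst string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("bad response code: %d", resp.StatusCode)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

// downloadDriver tries the arch-specific asset first and falls back to the
// common asset, mirroring the order seen in the log above; the checksum
// handling done by the real download path is intentionally omitted here.
func downloadDriver(version, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	if err := fetch(base+"/docker-machine-driver-kvm2-amd64", dst); err == nil {
		return nil
	}
	return fetch(base+"/docker-machine-driver-kvm2", dst)
}

func main() {
	if err := downloadDriver("v1.3.0", "/tmp/docker-machine-driver-kvm2"); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
}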

                                                
                                    
TestErrorSpam/setup (44.38s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-386576 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-386576 --driver=kvm2  --container-runtime=crio
E0920 21:05:09.137370   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.143732   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.155074   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.176574   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.218066   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.299537   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.461109   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:09.782818   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:10.425132   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:11.706731   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:14.268192   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-386576 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-386576 --driver=kvm2  --container-runtime=crio: (44.379362153s)
--- PASS: TestErrorSpam/setup (44.38s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.72s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 status
E0920 21:05:19.389590   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 status
--- PASS: TestErrorSpam/status (0.72s)

                                                
                                    
TestErrorSpam/pause (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 pause
--- PASS: TestErrorSpam/pause (1.60s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.57s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop: (2.313102706s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop: (1.339283163s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-386576 --log_dir /tmp/nospam-386576 stop: (1.914821729s)
--- PASS: TestErrorSpam/stop (5.57s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19672-9614/.minikube/files/etc/test/nested/copy/16785/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (82.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E0920 21:05:29.631041   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:05:50.112830   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:06:31.075352   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-523822 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m22.492783361s)
--- PASS: TestFunctional/serial/StartWithProxy (82.49s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (367.41s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0920 21:06:51.800184   16785 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --alsologtostderr -v=8
E0920 21:07:52.999555   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:10:09.131804   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:10:36.841424   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-523822 --alsologtostderr -v=8: (6m7.404134027s)
functional_test.go:663: soft start took 6m7.404838909s for "functional-523822" cluster.
I0920 21:12:59.204934   16785 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (367.41s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-523822 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:3.1: (1.254254413s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:3.3: (1.170710484s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 cache add registry.k8s.io/pause:latest: (1.146222619s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-523822 /tmp/TestFunctionalserialCacheCmdcacheadd_local2220815873/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache add minikube-local-cache-test:functional-523822
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache delete minikube-local-cache-test:functional-523822
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-523822
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (215.198562ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 cache reload: (1.012674347s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.71s)
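
For reference, the reload sequence exercised above can be reproduced outside the test harness. This is a minimal sketch assuming a local build at out/minikube-linux-amd64 and an already-running "functional-523822" profile; it simply shells out to the same commands the test ran: remove the cached pause image on the node, run cache reload, then verify the image is present again.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run invokes the locally built minikube binary with the given arguments and
// echoes its combined output, the same way the test drives it.
func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ out/minikube-linux-amd64 %v\n%s", args, out)
	return err
}

func main() {
	// Remove the cached image from the node; ignore the error if it is already absent.
	_ = run("-p", "functional-523822", "ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")

	// Repopulate the node from minikube's local image cache.
	if err := run("-p", "functional-523822", "cache", "reload"); err != nil {
		fmt.Fprintln(os.Stderr, "cache reload failed:", err)
		os.Exit(1)
	}

	// The image should be back; a non-zero exit here reproduces the failure case.
	if err := run("-p", "functional-523822", "ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
		fmt.Fprintln(os.Stderr, "image still missing after cache reload:", err)
		os.Exit(1)
	}
}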

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 kubectl -- --context functional-523822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-523822 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (57.02s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-523822 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (57.020271021s)
functional_test.go:761: restart took 57.020420977s for "functional-523822" cluster.
I0920 21:14:03.365542   16785 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (57.02s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-523822 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 logs: (1.589898059s)
--- PASS: TestFunctional/serial/LogsCmd (1.59s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 logs --file /tmp/TestFunctionalserialLogsFileCmd2662191056/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 logs --file /tmp/TestFunctionalserialLogsFileCmd2662191056/001/logs.txt: (1.562200279s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
TestFunctional/serial/InvalidService (5.01s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-523822 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-523822
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-523822: exit status 115 (275.008061ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.40:30838 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-523822 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-523822 delete -f testdata/invalidsvc.yaml: (1.538863225s)
--- PASS: TestFunctional/serial/InvalidService (5.01s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 config get cpus: exit status 14 (56.851322ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 config get cpus: exit status 14 (46.499225ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (30.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-523822 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-523822 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28898: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (30.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-523822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (133.288622ms)

                                                
                                                
-- stdout --
	* [functional-523822] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:14:21.110995   28020 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:14:21.111240   28020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:21.111249   28020 out.go:358] Setting ErrFile to fd 2...
	I0920 21:14:21.111253   28020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:21.111447   28020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:14:21.111968   28020 out.go:352] Setting JSON to false
	I0920 21:14:21.112897   28020 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3409,"bootTime":1726863452,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:14:21.113006   28020 start.go:139] virtualization: kvm guest
	I0920 21:14:21.115412   28020 out.go:177] * [functional-523822] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 21:14:21.116989   28020 notify.go:220] Checking for updates...
	I0920 21:14:21.117002   28020 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:14:21.118504   28020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:14:21.119987   28020 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:14:21.121193   28020 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:21.122280   28020 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:14:21.123329   28020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:14:21.124775   28020 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:14:21.125209   28020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:21.125260   28020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:21.140485   28020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36419
	I0920 21:14:21.140855   28020 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:21.141413   28020 main.go:141] libmachine: Using API Version  1
	I0920 21:14:21.141432   28020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:21.141721   28020 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:21.141908   28020 main.go:141] libmachine: (functional-523822) Calling .DriverName
	I0920 21:14:21.142159   28020 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:14:21.142492   28020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:21.142535   28020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:21.157516   28020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38087
	I0920 21:14:21.157907   28020 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:21.158451   28020 main.go:141] libmachine: Using API Version  1
	I0920 21:14:21.158476   28020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:21.158848   28020 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:21.159053   28020 main.go:141] libmachine: (functional-523822) Calling .DriverName
	I0920 21:14:21.195611   28020 out.go:177] * Using the kvm2 driver based on existing profile
	I0920 21:14:21.197144   28020 start.go:297] selected driver: kvm2
	I0920 21:14:21.197157   28020 start.go:901] validating driver "kvm2" against &{Name:functional-523822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-523822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:14:21.197285   28020 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:14:21.199823   28020 out.go:201] 
	W0920 21:14:21.200996   28020 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 21:14:21.202191   28020 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-523822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-523822 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (143.398513ms)

                                                
                                                
-- stdout --
	* [functional-523822] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:14:21.388992   28117 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:14:21.389136   28117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:21.389145   28117 out.go:358] Setting ErrFile to fd 2...
	I0920 21:14:21.389150   28117 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:14:21.389445   28117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:14:21.389989   28117 out.go:352] Setting JSON to false
	I0920 21:14:21.390921   28117 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":3409,"bootTime":1726863452,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 21:14:21.391014   28117 start.go:139] virtualization: kvm guest
	I0920 21:14:21.393161   28117 out.go:177] * [functional-523822] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 21:14:21.394368   28117 notify.go:220] Checking for updates...
	I0920 21:14:21.394377   28117 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 21:14:21.395615   28117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 21:14:21.396825   28117 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 21:14:21.397838   28117 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 21:14:21.398901   28117 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 21:14:21.399965   28117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 21:14:21.401351   28117 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:14:21.401774   28117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:21.401862   28117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:21.417046   28117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37011
	I0920 21:14:21.417519   28117 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:21.418077   28117 main.go:141] libmachine: Using API Version  1
	I0920 21:14:21.418101   28117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:21.418435   28117 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:21.418617   28117 main.go:141] libmachine: (functional-523822) Calling .DriverName
	I0920 21:14:21.418877   28117 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 21:14:21.419338   28117 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:14:21.419382   28117 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:14:21.436699   28117 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45195
	I0920 21:14:21.437044   28117 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:14:21.437655   28117 main.go:141] libmachine: Using API Version  1
	I0920 21:14:21.437686   28117 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:14:21.438024   28117 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:14:21.438205   28117 main.go:141] libmachine: (functional-523822) Calling .DriverName
	I0920 21:14:21.479908   28117 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0920 21:14:21.481062   28117 start.go:297] selected driver: kvm2
	I0920 21:14:21.481078   28117 start.go:901] validating driver "kvm2" against &{Name:functional-523822 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19672/minikube-v1.34.0-1726784654-19672-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-523822 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.40 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 21:14:21.481223   28117 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 21:14:21.483557   28117 out.go:201] 
	W0920 21:14:21.484730   28117 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 21:14:21.485871   28117 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.37s)
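
Note: the template-format invocation exercised above can be reproduced outside the test harness. The following is a minimal sketch only, assuming a `minikube` binary on PATH and reusing the profile name from the log (`functional-523822`); it is not the test's own code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run `minikube status` with a Go-template format string, mirroring the
	// kind of invocation shown in the log above.
	out, err := exec.Command("minikube", "-p", "functional-523822", "status",
		"-f", "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	if err != nil {
		// `minikube status` exits non-zero when a component is not Running.
		fmt.Printf("status returned error: %v\n", err)
	}
	fmt.Println(string(out))
}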

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (6.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-523822 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-523822 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k52lm" [511a6cbe-f1fa-4592-87c3-875ee58c3f00] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-k52lm" [511a6cbe-f1fa-4592-87c3-875ee58c3f00] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.025742357s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.40:31267
functional_test.go:1675: http://192.168.39.40:31267: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-k52lm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.40:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.40:31267
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.72s)
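
Note: the endpoint check above amounts to an HTTP GET against the NodePort URL that `minikube service ... --url` prints. A minimal sketch follows, with the URL hard-coded from the log above; in practice you would capture it from the command output first.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NodePort URL taken from the log above; normally obtained from
	// `minikube service hello-node-connect --url`.
	url := "http://192.168.39.40:31267"

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s\n", resp.StatusCode, body)
}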

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (41.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [30d1c1b2-d3db-4b2b-b003-c5cd1a8278de] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005013535s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-523822 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-523822 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-523822 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-523822 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-523822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [742f4f67-ae78-4ede-98fe-2b267204ad22] Pending
helpers_test.go:344: "sp-pod" [742f4f67-ae78-4ede-98fe-2b267204ad22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [742f4f67-ae78-4ede-98fe-2b267204ad22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004631s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-523822 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-523822 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-523822 delete -f testdata/storage-provisioner/pod.yaml: (5.360567919s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-523822 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9e90c7f6-8a5f-4f27-871b-4876f1a163f8] Pending
helpers_test.go:344: "sp-pod" [9e90c7f6-8a5f-4f27-871b-4876f1a163f8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9e90c7f6-8a5f-4f27-871b-4876f1a163f8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004767128s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-523822 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.37s)
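
Note: the repeated `get pvc myclaim -o json` calls above are effectively polling for the claim to reach phase Bound before the pod is created. A minimal sketch of that kind of check (assuming `kubectl` on PATH and the `functional-523822` context from the log; not the harness's own helper):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

// pvcStatus models only the field this check cares about from `kubectl get pvc -o json`.
type pvcStatus struct {
	Status struct {
		Phase string `json:"phase"`
	} `json:"status"`
}

func main() {
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-523822",
			"get", "pvc", "myclaim", "-o", "json").Output()
		if err == nil {
			var pvc pvcStatus
			if json.Unmarshal(out, &pvc) == nil && pvc.Status.Phase == "Bound" {
				fmt.Println("claim is Bound")
				return
			}
		}
		time.Sleep(2 * time.Second) // retry until the provisioner binds the claim
	}
	fmt.Println("claim never became Bound")
}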

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh -n functional-523822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cp functional-523822:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1542500797/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh -n functional-523822 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh -n functional-523822 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.45s)
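
Note: the copy-then-verify pattern above (cp into the node, then `ssh ... sudo cat` to confirm the contents) is easy to script. A minimal sketch, assuming the same profile name and a local testdata/cp-test.txt; paths and names mirror the log, not a fixed API.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and returns combined output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return string(out), err
}

func main() {
	profile := "functional-523822" // from the log above

	// Copy a local file into the node, then read it back over SSH.
	if _, err := run("-p", profile, "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	out, err := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	fmt.Print(out)
}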

                                                
                                    
x
+
TestFunctional/parallel/MySQL (27.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-523822 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-q4rxd" [cb511a5b-596a-44b7-a935-0091d9412d28] Pending
helpers_test.go:344: "mysql-6cdb49bbb-q4rxd" [cb511a5b-596a-44b7-a935-0091d9412d28] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-q4rxd" [cb511a5b-596a-44b7-a935-0091d9412d28] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.140659695s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-523822 exec mysql-6cdb49bbb-q4rxd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-523822 exec mysql-6cdb49bbb-q4rxd -- mysql -ppassword -e "show databases;": exit status 1 (203.686988ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 21:14:49.396459   16785 retry.go:31] will retry after 1.284795403s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-523822 exec mysql-6cdb49bbb-q4rxd -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-523822 exec mysql-6cdb49bbb-q4rxd -- mysql -ppassword -e "show databases;": exit status 1 (142.711147ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 21:14:50.824411   16785 retry.go:31] will retry after 1.07423955s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-523822 exec mysql-6cdb49bbb-q4rxd -- mysql -ppassword -e "show databases;"
2024/09/20 21:14:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (27.16s)
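
Note: the two `ERROR 2002` failures followed by retries above show a common pattern: the pod reports Running before mysqld is actually accepting connections, so the query is retried with a short backoff. A minimal sketch of such a retry loop (pod name, context, and credentials copied from the log purely for illustration):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Values mirror the log above; adjust for a real deployment.
	args := []string{"--context", "functional-523822", "exec", "mysql-6cdb49bbb-q4rxd",
		"--", "mysql", "-ppassword", "-e", "show databases;"}

	delay := time.Second
	for attempt := 1; attempt <= 6; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		// ERROR 2002 usually just means mysqld is not accepting connections yet.
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	fmt.Println("mysql never became reachable")
}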

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/16785/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /etc/test/nested/copy/16785/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/16785.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /etc/ssl/certs/16785.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/16785.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /usr/share/ca-certificates/16785.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/167852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /etc/ssl/certs/167852.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/167852.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /usr/share/ca-certificates/167852.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-523822 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh "sudo systemctl is-active docker": exit status 1 (215.765493ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh "sudo systemctl is-active containerd": exit status 1 (208.030229ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.42s)
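
Note: the `exit status 1` / `ssh: Process exited with status 3` pairs above are the expected outcome here: `systemctl is-active` exits 0 only when the unit is active (3 typically meaning inactive), and the check treats a non-zero exit with `inactive` on stdout as success on a crio cluster. A minimal sketch of reading that exit status through `minikube ssh` (same profile assumed; `docker` is the unit checked in the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "functional-523822", "ssh",
		"sudo systemctl is-active docker")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		// Unit is active: unexpected when crio is the configured runtime.
		fmt.Printf("unit is active: %s", out)
	case errors.As(err, &exitErr):
		// minikube ssh exits non-zero when the remote command fails;
		// stdout still carries systemctl's answer ("inactive").
		fmt.Printf("exit code %d, output: %s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run command:", err)
	}
}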

                                                
                                    
x
+
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-523822 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-523822 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fz5n2" [0901b1d0-54fa-4c70-bc3c-6bd6a7fb16d2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fz5n2" [0901b1d0-54fa-4c70-bc3c-6bd6a7fb16d2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.004540233s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.22s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.70s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-523822 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-523822
localhost/kicbase/echo-server:functional-523822
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-523822 image ls --format short --alsologtostderr:
I0920 21:14:29.521007   29042 out.go:345] Setting OutFile to fd 1 ...
I0920 21:14:29.522078   29042 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:29.522134   29042 out.go:358] Setting ErrFile to fd 2...
I0920 21:14:29.522152   29042 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:29.522831   29042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
I0920 21:14:29.524937   29042 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:29.525084   29042 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:29.525492   29042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:29.525544   29042 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:29.542604   29042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38991
I0920 21:14:29.543108   29042 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:29.543796   29042 main.go:141] libmachine: Using API Version  1
I0920 21:14:29.543821   29042 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:29.544115   29042 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:29.544312   29042 main.go:141] libmachine: (functional-523822) Calling .GetState
I0920 21:14:29.546023   29042 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:29.546070   29042 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:29.560894   29042 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41461
I0920 21:14:29.561293   29042 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:29.561881   29042 main.go:141] libmachine: Using API Version  1
I0920 21:14:29.561906   29042 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:29.562250   29042 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:29.562473   29042 main.go:141] libmachine: (functional-523822) Calling .DriverName
I0920 21:14:29.562686   29042 ssh_runner.go:195] Run: systemctl --version
I0920 21:14:29.562717   29042 main.go:141] libmachine: (functional-523822) Calling .GetSSHHostname
I0920 21:14:29.565605   29042 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:29.565922   29042 main.go:141] libmachine: (functional-523822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:35:40", ip: ""} in network mk-functional-523822: {Iface:virbr1 ExpiryTime:2024-09-20 22:05:44 +0000 UTC Type:0 Mac:52:54:00:75:35:40 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-523822 Clientid:01:52:54:00:75:35:40}
I0920 21:14:29.565957   29042 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined IP address 192.168.39.40 and MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:29.566117   29042 main.go:141] libmachine: (functional-523822) Calling .GetSSHPort
I0920 21:14:29.566301   29042 main.go:141] libmachine: (functional-523822) Calling .GetSSHKeyPath
I0920 21:14:29.566454   29042 main.go:141] libmachine: (functional-523822) Calling .GetSSHUsername
I0920 21:14:29.566574   29042 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/functional-523822/id_rsa Username:docker}
I0920 21:14:29.683693   29042 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 21:14:29.756313   29042 main.go:141] libmachine: Making call to close driver server
I0920 21:14:29.756328   29042 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:29.756600   29042 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:29.756657   29042 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:29.756670   29042 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:29.756681   29042 main.go:141] libmachine: Making call to close driver server
I0920 21:14:29.756691   29042 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:29.757010   29042 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:29.757030   29042 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:29.757034   29042 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-523822 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-523822  | fa8eba5e11cc0 | 3.33kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| localhost/kicbase/echo-server           | functional-523822  | 9056ab77afb8e | 4.94MB |
| localhost/my-image                      | functional-523822  | 15b060aee32f4 | 1.47MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-523822 image ls --format table --alsologtostderr:
I0920 21:14:34.582362   29215 out.go:345] Setting OutFile to fd 1 ...
I0920 21:14:34.582683   29215 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:34.582696   29215 out.go:358] Setting ErrFile to fd 2...
I0920 21:14:34.582703   29215 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:34.583028   29215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
I0920 21:14:34.583883   29215 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:34.584073   29215 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:34.584645   29215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:34.584703   29215 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:34.600311   29215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37239
I0920 21:14:34.600874   29215 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:34.601527   29215 main.go:141] libmachine: Using API Version  1
I0920 21:14:34.601550   29215 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:34.601904   29215 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:34.602132   29215 main.go:141] libmachine: (functional-523822) Calling .GetState
I0920 21:14:34.604523   29215 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:34.604576   29215 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:34.619777   29215 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36301
I0920 21:14:34.620230   29215 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:34.620817   29215 main.go:141] libmachine: Using API Version  1
I0920 21:14:34.620841   29215 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:34.621179   29215 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:34.621388   29215 main.go:141] libmachine: (functional-523822) Calling .DriverName
I0920 21:14:34.621602   29215 ssh_runner.go:195] Run: systemctl --version
I0920 21:14:34.621628   29215 main.go:141] libmachine: (functional-523822) Calling .GetSSHHostname
I0920 21:14:34.624727   29215 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:34.625203   29215 main.go:141] libmachine: (functional-523822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:35:40", ip: ""} in network mk-functional-523822: {Iface:virbr1 ExpiryTime:2024-09-20 22:05:44 +0000 UTC Type:0 Mac:52:54:00:75:35:40 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-523822 Clientid:01:52:54:00:75:35:40}
I0920 21:14:34.625225   29215 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined IP address 192.168.39.40 and MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:34.625348   29215 main.go:141] libmachine: (functional-523822) Calling .GetSSHPort
I0920 21:14:34.625512   29215 main.go:141] libmachine: (functional-523822) Calling .GetSSHKeyPath
I0920 21:14:34.625652   29215 main.go:141] libmachine: (functional-523822) Calling .GetSSHUsername
I0920 21:14:34.625785   29215 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/functional-523822/id_rsa Username:docker}
I0920 21:14:34.761653   29215 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 21:14:34.849464   29215 main.go:141] libmachine: Making call to close driver server
I0920 21:14:34.849486   29215 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:34.849798   29215 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:34.849820   29215 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:34.849805   29215 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:34.849843   29215 main.go:141] libmachine: Making call to close driver server
I0920 21:14:34.849913   29215 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:34.850155   29215 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:34.850169   29215 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-523822 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],
"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"
size":"87190579"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d769
4bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-523822"],"size":"4943877"},{"id":"fa8eba5e11cc0c0c9caa286157d29b5e528202f4eb2121d0ff30f43c4fa5b9d4","repoDigests":["localhost/minikube-local-cache-test@sha256:fb9feb0cf554ffa1ac86e3f383a59439688a57a1ef5469c9d2a43249d9aebdab"],"repoTags":["localhost/minikube-local-cache-test:functional-523822"],"size":"3330"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f90
32cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfe
c52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"15b060aee32f4f8b9c4a28747bd3c6816cfcb2cdb46b3e6053aa3dd1fc9f8614","repoDigests":["localhost/my-image@sha256:7ff74918ac3eb8673d62534818dd30533d5a00e7643da007b09b3c5ac65d7e7c"],"re
poTags":["localhost/my-image:functional-523822"],"size":"1468600"},{"id":"b03dac2d9c158fab14c6d0b374fd701c77c670a1468c435d785f11b18e02f813","repoDigests":["docker.io/library/f69e0b192f1bf148df08fc1eddc840bdf9f3c0a158a6dbd2771c3c382f76d7cf-tmp@sha256:9aa74983a108ba2a0e1eda47e9884c2acacc09d87fc2464f66312c9e566fb9f9"],"repoTags":[],"size":"1466018"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-523822 image ls --format json --alsologtostderr:
I0920 21:14:34.198316   29192 out.go:345] Setting OutFile to fd 1 ...
I0920 21:14:34.198425   29192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:34.198434   29192 out.go:358] Setting ErrFile to fd 2...
I0920 21:14:34.198438   29192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:34.198611   29192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
I0920 21:14:34.199217   29192 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:34.199318   29192 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:34.199676   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:34.199717   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:34.215626   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34923
I0920 21:14:34.216139   29192 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:34.216810   29192 main.go:141] libmachine: Using API Version  1
I0920 21:14:34.216838   29192 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:34.217242   29192 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:34.217439   29192 main.go:141] libmachine: (functional-523822) Calling .GetState
I0920 21:14:34.219576   29192 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:34.219626   29192 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:34.234878   29192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44719
I0920 21:14:34.235449   29192 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:34.236011   29192 main.go:141] libmachine: Using API Version  1
I0920 21:14:34.236050   29192 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:34.236376   29192 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:34.236551   29192 main.go:141] libmachine: (functional-523822) Calling .DriverName
I0920 21:14:34.236788   29192 ssh_runner.go:195] Run: systemctl --version
I0920 21:14:34.236815   29192 main.go:141] libmachine: (functional-523822) Calling .GetSSHHostname
I0920 21:14:34.239851   29192 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:34.240201   29192 main.go:141] libmachine: (functional-523822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:35:40", ip: ""} in network mk-functional-523822: {Iface:virbr1 ExpiryTime:2024-09-20 22:05:44 +0000 UTC Type:0 Mac:52:54:00:75:35:40 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-523822 Clientid:01:52:54:00:75:35:40}
I0920 21:14:34.240237   29192 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined IP address 192.168.39.40 and MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:34.240405   29192 main.go:141] libmachine: (functional-523822) Calling .GetSSHPort
I0920 21:14:34.240549   29192 main.go:141] libmachine: (functional-523822) Calling .GetSSHKeyPath
I0920 21:14:34.240686   29192 main.go:141] libmachine: (functional-523822) Calling .GetSSHUsername
I0920 21:14:34.240797   29192 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/functional-523822/id_rsa Username:docker}
I0920 21:14:34.344178   29192 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 21:14:34.528803   29192 main.go:141] libmachine: Making call to close driver server
I0920 21:14:34.528815   29192 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:34.529099   29192 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:34.529118   29192 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:34.529132   29192 main.go:141] libmachine: Making call to close driver server
I0920 21:14:34.529140   29192 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:34.529161   29192 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:34.529350   29192 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:34.529398   29192 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:34.529425   29192 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.38s)
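
Note: the JSON listing above is straightforward to consume programmatically; each entry carries `id`, `repoDigests`, `repoTags`, and `size` (size is a string in this output). A minimal sketch that unmarshals `image ls --format json` into a matching struct, with field names taken from the output above and the profile name reused from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-523822",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}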

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-523822 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-523822
size: "4943877"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: fa8eba5e11cc0c0c9caa286157d29b5e528202f4eb2121d0ff30f43c4fa5b9d4
repoDigests:
- localhost/minikube-local-cache-test@sha256:fb9feb0cf554ffa1ac86e3f383a59439688a57a1ef5469c9d2a43249d9aebdab
repoTags:
- localhost/minikube-local-cache-test:functional-523822
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-523822 image ls --format yaml --alsologtostderr:
I0920 21:14:29.805122   29075 out.go:345] Setting OutFile to fd 1 ...
I0920 21:14:29.805243   29075 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:29.805254   29075 out.go:358] Setting ErrFile to fd 2...
I0920 21:14:29.805260   29075 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:29.805462   29075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
I0920 21:14:29.806048   29075 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:29.806167   29075 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:29.806563   29075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:29.806616   29075 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:29.822031   29075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
I0920 21:14:29.822601   29075 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:29.823262   29075 main.go:141] libmachine: Using API Version  1
I0920 21:14:29.823285   29075 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:29.823676   29075 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:29.823895   29075 main.go:141] libmachine: (functional-523822) Calling .GetState
I0920 21:14:29.826000   29075 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:29.826056   29075 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:29.840925   29075 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39489
I0920 21:14:29.841459   29075 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:29.841952   29075 main.go:141] libmachine: Using API Version  1
I0920 21:14:29.841972   29075 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:29.842317   29075 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:29.842485   29075 main.go:141] libmachine: (functional-523822) Calling .DriverName
I0920 21:14:29.842677   29075 ssh_runner.go:195] Run: systemctl --version
I0920 21:14:29.842700   29075 main.go:141] libmachine: (functional-523822) Calling .GetSSHHostname
I0920 21:14:29.845547   29075 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:29.845936   29075 main.go:141] libmachine: (functional-523822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:35:40", ip: ""} in network mk-functional-523822: {Iface:virbr1 ExpiryTime:2024-09-20 22:05:44 +0000 UTC Type:0 Mac:52:54:00:75:35:40 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-523822 Clientid:01:52:54:00:75:35:40}
I0920 21:14:29.845977   29075 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined IP address 192.168.39.40 and MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:29.846083   29075 main.go:141] libmachine: (functional-523822) Calling .GetSSHPort
I0920 21:14:29.846235   29075 main.go:141] libmachine: (functional-523822) Calling .GetSSHKeyPath
I0920 21:14:29.846345   29075 main.go:141] libmachine: (functional-523822) Calling .GetSSHUsername
I0920 21:14:29.846464   29075 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/functional-523822/id_rsa Username:docker}
I0920 21:14:29.937895   29075 ssh_runner.go:195] Run: sudo crictl images --output json
I0920 21:14:29.973227   29075 main.go:141] libmachine: Making call to close driver server
I0920 21:14:29.973242   29075 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:29.973517   29075 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:29.973544   29075 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:29.973563   29075 main.go:141] libmachine: Making call to close driver server
I0920 21:14:29.973571   29075 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:29.973517   29075 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:29.973803   29075 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:29.973823   29075 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:29.973827   29075 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh pgrep buildkitd: exit status 1 (195.533259ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image build -t localhost/my-image:functional-523822 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 image build -t localhost/my-image:functional-523822 testdata/build --alsologtostderr: (3.340523433s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-523822 image build -t localhost/my-image:functional-523822 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b03dac2d9c1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-523822
--> 15b060aee32
Successfully tagged localhost/my-image:functional-523822
15b060aee32f4f8b9c4a28747bd3c6816cfcb2cdb46b3e6053aa3dd1fc9f8614
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-523822 image build -t localhost/my-image:functional-523822 testdata/build --alsologtostderr:
I0920 21:14:30.255813   29129 out.go:345] Setting OutFile to fd 1 ...
I0920 21:14:30.255959   29129 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:30.255969   29129 out.go:358] Setting ErrFile to fd 2...
I0920 21:14:30.255974   29129 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 21:14:30.256139   29129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
I0920 21:14:30.256738   29129 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:30.257284   29129 config.go:182] Loaded profile config "functional-523822": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0920 21:14:30.257652   29129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:30.257694   29129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:30.273180   29129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45075
I0920 21:14:30.273695   29129 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:30.274422   29129 main.go:141] libmachine: Using API Version  1
I0920 21:14:30.274448   29129 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:30.274790   29129 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:30.275016   29129 main.go:141] libmachine: (functional-523822) Calling .GetState
I0920 21:14:30.277102   29129 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0920 21:14:30.277147   29129 main.go:141] libmachine: Launching plugin server for driver kvm2
I0920 21:14:30.292542   29129 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39199
I0920 21:14:30.293106   29129 main.go:141] libmachine: () Calling .GetVersion
I0920 21:14:30.293679   29129 main.go:141] libmachine: Using API Version  1
I0920 21:14:30.293706   29129 main.go:141] libmachine: () Calling .SetConfigRaw
I0920 21:14:30.294065   29129 main.go:141] libmachine: () Calling .GetMachineName
I0920 21:14:30.294252   29129 main.go:141] libmachine: (functional-523822) Calling .DriverName
I0920 21:14:30.294440   29129 ssh_runner.go:195] Run: systemctl --version
I0920 21:14:30.294479   29129 main.go:141] libmachine: (functional-523822) Calling .GetSSHHostname
I0920 21:14:30.297427   29129 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:30.297892   29129 main.go:141] libmachine: (functional-523822) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:35:40", ip: ""} in network mk-functional-523822: {Iface:virbr1 ExpiryTime:2024-09-20 22:05:44 +0000 UTC Type:0 Mac:52:54:00:75:35:40 Iaid: IPaddr:192.168.39.40 Prefix:24 Hostname:functional-523822 Clientid:01:52:54:00:75:35:40}
I0920 21:14:30.297932   29129 main.go:141] libmachine: (functional-523822) DBG | domain functional-523822 has defined IP address 192.168.39.40 and MAC address 52:54:00:75:35:40 in network mk-functional-523822
I0920 21:14:30.298076   29129 main.go:141] libmachine: (functional-523822) Calling .GetSSHPort
I0920 21:14:30.298231   29129 main.go:141] libmachine: (functional-523822) Calling .GetSSHKeyPath
I0920 21:14:30.298392   29129 main.go:141] libmachine: (functional-523822) Calling .GetSSHUsername
I0920 21:14:30.298522   29129 sshutil.go:53] new ssh client: &{IP:192.168.39.40 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/functional-523822/id_rsa Username:docker}
I0920 21:14:30.401572   29129 build_images.go:161] Building image from path: /tmp/build.3100744596.tar
I0920 21:14:30.401659   29129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 21:14:30.413510   29129 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3100744596.tar
I0920 21:14:30.418524   29129 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3100744596.tar: stat -c "%s %y" /var/lib/minikube/build/build.3100744596.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3100744596.tar': No such file or directory
I0920 21:14:30.418562   29129 ssh_runner.go:362] scp /tmp/build.3100744596.tar --> /var/lib/minikube/build/build.3100744596.tar (3072 bytes)
I0920 21:14:30.461695   29129 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3100744596
I0920 21:14:30.476564   29129 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3100744596 -xf /var/lib/minikube/build/build.3100744596.tar
I0920 21:14:30.488603   29129 crio.go:315] Building image: /var/lib/minikube/build/build.3100744596
I0920 21:14:30.488668   29129 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-523822 /var/lib/minikube/build/build.3100744596 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0920 21:14:33.489540   29129 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-523822 /var/lib/minikube/build/build.3100744596 --cgroup-manager=cgroupfs: (3.000836873s)
I0920 21:14:33.489622   29129 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3100744596
I0920 21:14:33.517692   29129 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3100744596.tar
I0920 21:14:33.545889   29129 build_images.go:217] Built localhost/my-image:functional-523822 from /tmp/build.3100744596.tar
I0920 21:14:33.545929   29129 build_images.go:133] succeeded building to: functional-523822
I0920 21:14:33.545935   29129 build_images.go:134] failed building to: 
I0920 21:14:33.545962   29129 main.go:141] libmachine: Making call to close driver server
I0920 21:14:33.545975   29129 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:33.546236   29129 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:33.546250   29129 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:33.546265   29129 main.go:141] libmachine: Making call to close connection to plugin binary
I0920 21:14:33.546274   29129 main.go:141] libmachine: Making call to close driver server
I0920 21:14:33.546282   29129 main.go:141] libmachine: (functional-523822) Calling .Close
I0920 21:14:33.546502   29129 main.go:141] libmachine: Successfully made call to close driver server
I0920 21:14:33.546512   29129 main.go:141] libmachine: (functional-523822) DBG | Closing plugin on server side
I0920 21:14:33.546519   29129 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.14s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-523822
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image load --daemon kicbase/echo-server:functional-523822 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 image load --daemon kicbase/echo-server:functional-523822 --alsologtostderr: (2.165433788s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
functional_test.go:451: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 image ls: (1.087438659s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.25s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "544.786225ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "55.711765ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdany-port3508856979/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726866854416950516" to /tmp/TestFunctionalparallelMountCmdany-port3508856979/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726866854416950516" to /tmp/TestFunctionalparallelMountCmdany-port3508856979/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726866854416950516" to /tmp/TestFunctionalparallelMountCmdany-port3508856979/001/test-1726866854416950516
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.923005ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 21:14:14.681204   16785 retry.go:31] will retry after 622.868185ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 21:14 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 21:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 21:14 test-1726866854416950516
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh cat /mount-9p/test-1726866854416950516
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-523822 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d4b3e9a2-da87-4c07-8274-5e69cc37136e] Pending
helpers_test.go:344: "busybox-mount" [d4b3e9a2-da87-4c07-8274-5e69cc37136e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d4b3e9a2-da87-4c07-8274-5e69cc37136e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d4b3e9a2-da87-4c07-8274-5e69cc37136e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004849076s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-523822 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdany-port3508856979/001:/mount-9p --alsologtostderr -v=1] ...
I0920 21:14:21.890873   16785 retry.go:31] will retry after 1.862613518s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:010479b3-c729-45a0-bf64-682463f5d385 ResourceVersion:684 Generation:0 CreationTimestamp:2024-09-20 21:14:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0023b32e0 VolumeMode:0xc0023b32f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)
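For readability, the PersistentVolumeClaim embedded as one-line JSON in the retry message above (the kubectl.kubernetes.io/last-applied-configuration annotation) corresponds to the following manifest; this is a direct YAML transcription of that logged JSON, not the test's source file:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem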

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "375.444721ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.821816ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image load --daemon kicbase/echo-server:functional-523822 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-523822 image load --daemon kicbase/echo-server:functional-523822 --alsologtostderr: (1.079806504s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-523822
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image load --daemon kicbase/echo-server:functional-523822 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.01s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image save kicbase/echo-server:functional-523822 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image rm kicbase/echo-server:functional-523822 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.96s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-523822
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 image save --daemon kicbase/echo-server:functional-523822 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-523822
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdspecific-port248028735/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (203.568947ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 21:14:22.401551   16785 retry.go:31] will retry after 491.140449ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdspecific-port248028735/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-523822 ssh "sudo umount -f /mount-9p": exit status 1 (192.268949ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-523822 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdspecific-port248028735/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-523822 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-523822 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1268155249/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service list -o json
functional_test.go:1494: Took "322.465177ms" to run "out/minikube-linux-amd64 -p functional-523822 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.40:30507
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-523822 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.40:30507
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-523822
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-523822
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-523822
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-797954 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 21:15:09.132166   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-797954 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m19.871001157s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (200.54s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-797954 -- rollout status deployment/busybox: (3.794442212s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-44q29 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-5m4df -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-s4w7f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-44q29 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-5m4df -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-s4w7f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-44q29 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-5m4df -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-s4w7f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.98s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-44q29 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-44q29 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-5m4df -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-5m4df -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-s4w7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-797954 -- exec busybox-7dff88458-s4w7f -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.22s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-797954 -v=7 --alsologtostderr
E0920 21:19:11.809173   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:11.815599   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:11.827056   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:11.848462   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:11.889892   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:11.971364   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:12.132661   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:12.454179   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:13.095703   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:14.377676   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:19:16.939341   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-797954 -v=7 --alsologtostderr: (55.562720317s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
E0920 21:19:22.061280   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-797954 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (12.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp testdata/cp-test.txt ha-797954:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954:/home/docker/cp-test.txt ha-797954-m02:/home/docker/cp-test_ha-797954_ha-797954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test_ha-797954_ha-797954-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954:/home/docker/cp-test.txt ha-797954-m03:/home/docker/cp-test_ha-797954_ha-797954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test_ha-797954_ha-797954-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954:/home/docker/cp-test.txt ha-797954-m04:/home/docker/cp-test_ha-797954_ha-797954-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test_ha-797954_ha-797954-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp testdata/cp-test.txt ha-797954-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m02:/home/docker/cp-test.txt ha-797954:/home/docker/cp-test_ha-797954-m02_ha-797954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test_ha-797954-m02_ha-797954.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m02:/home/docker/cp-test.txt ha-797954-m03:/home/docker/cp-test_ha-797954-m02_ha-797954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test_ha-797954-m02_ha-797954-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m02:/home/docker/cp-test.txt ha-797954-m04:/home/docker/cp-test_ha-797954-m02_ha-797954-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test_ha-797954-m02_ha-797954-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp testdata/cp-test.txt ha-797954-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt ha-797954:/home/docker/cp-test_ha-797954-m03_ha-797954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test_ha-797954-m03_ha-797954.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt ha-797954-m02:/home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test_ha-797954-m03_ha-797954-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m03:/home/docker/cp-test.txt ha-797954-m04:/home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt
E0920 21:19:32.302805   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test_ha-797954-m03_ha-797954-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp testdata/cp-test.txt ha-797954-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3341505942/001/cp-test_ha-797954-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt ha-797954:/home/docker/cp-test_ha-797954-m04_ha-797954.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954 "sudo cat /home/docker/cp-test_ha-797954-m04_ha-797954.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt ha-797954-m02:/home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m02 "sudo cat /home/docker/cp-test_ha-797954-m04_ha-797954-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 cp ha-797954-m04:/home/docker/cp-test.txt ha-797954-m03:/home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 ssh -n ha-797954-m03 "sudo cat /home/docker/cp-test_ha-797954-m04_ha-797954-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.135202038s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-797954 node delete m03 -v=7 --alsologtostderr: (16.058066717s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.81s)
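
Note on the readiness check at ha_test.go:519 above: kubectl renders the node list through a Go text/template that prints the Status of each node's "Ready" condition. The sketch below is a minimal, self-contained illustration of what that template evaluates to, using a hypothetical two-node list built as nested maps (the lowercase keys mirror the JSON kubectl feeds to the template); it is not part of the test suite.

// Illustration only: evaluate the same go-template the test passes to kubectl
// against a hypothetical node list, to show what the readiness check prints.
package main

import (
	"os"
	"text/template"
)

func main() {
	// Exact template body from the log (the surrounding single quotes there are
	// part of the shell argument, not the template).
	const body = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	readyNode := map[string]any{"status": map[string]any{"conditions": []any{
		map[string]any{"type": "Ready", "status": "True"},
	}}}
	// Hypothetical stand-in for the list kubectl would return after the delete:
	// two remaining nodes, both Ready.
	data := map[string]any{"items": []any{readyNode, readyNode}}

	t := template.Must(template.New("ready").Parse(body))
	_ = t.Execute(os.Stdout, data) // prints " True" on one line per node
}

With every node healthy, the template yields one " True" line per node, which is the condition the delete-and-verify flow above relies on.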

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (351.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-797954 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 21:34:11.809820   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:09.132102   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:35:34.872401   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-797954 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m50.253717731s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (351.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (73.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-797954 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-797954 --control-plane -v=7 --alsologtostderr: (1m12.910781094s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-797954 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.77s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

                                                
                                    
x
+
TestJSONOutput/start/Command (84.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-158495 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0920 21:39:11.809405   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-158495 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m24.037945312s)
--- PASS: TestJSONOutput/start/Command (84.04s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-158495 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-158495 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (7.33s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-158495 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-158495 --output=json --user=testUser: (7.328562223s)
--- PASS: TestJSONOutput/stop/Command (7.33s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-342228 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-342228 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.279013ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4b975a6e-0dd7-4145-a759-25514529de25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-342228] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"57658580-2f4a-4c9b-867b-bbcc284372b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"5645a04e-1828-4e5b-b0e3-590003136ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f4268b8b-e0fe-4391-bbd2-cf9e3c9ac681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig"}}
	{"specversion":"1.0","id":"dfe08e37-d8cb-4995-8f1b-fe2aace4ffb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube"}}
	{"specversion":"1.0","id":"933def90-665f-442c-80ce-8eb8f0a01af1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ec2471c3-0c2d-44c2-9961-e18b0cad0f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f3704942-8320-4808-afee-0c1c15fff9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-342228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-342228
--- PASS: TestErrorJSONOutput (0.19s)
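
Aside on the captured stdout above: with --output=json, every line minikube prints is a CloudEvents-style JSON envelope, and the final io.k8s.sigs.minikube.error event carries the exit code and message that explain the non-zero exit (status 56). The sketch below simply decodes that last line; the struct is a hand-rolled illustration matching the field names visible in the log, not minikube's own type.

// Illustration only: decode the error event from the captured stdout above.
package main

import (
	"encoding/json"
	"fmt"
)

type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Final line of the captured output: the error event carrying exit code 56.
	line := `{"specversion":"1.0","id":"f3704942-8320-4808-afee-0c1c15fff9b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
	// io.k8s.sigs.minikube.error 56 The driver 'fail' is not supported on linux/amd64
}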

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (91.2s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-057942 --driver=kvm2  --container-runtime=crio
E0920 21:40:09.138280   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-057942 --driver=kvm2  --container-runtime=crio: (47.180390287s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-071882 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-071882 --driver=kvm2  --container-runtime=crio: (41.010167035s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-057942
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-071882
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-071882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-071882
helpers_test.go:175: Cleaning up "first-057942" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-057942
--- PASS: TestMinikubeProfile (91.20s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (24.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-971308 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-971308 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (23.962355317s)
--- PASS: TestMountStart/serial/StartWithMountFirst (24.96s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971308 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971308 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
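
The verification step above checks the guest's mount table for a 9p entry (the host directory exported by --mount). As a rough equivalent, the sketch below scans /proc/self/mounts for a 9p filesystem on a Linux host; it only illustrates what "mount | grep 9p" looks for, not how the test runs it (the test does it over ssh inside the VM).

// Illustration only: look for 9p filesystems in the local mount table.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		fmt.Println("cannot read mount table:", err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// /proc/*/mounts format: device mountpoint fstype options dump pass
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 3 && fields[2] == "9p" {
			fmt.Println("9p mount found at:", fields[1])
		}
	}
}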

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (29.1s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-987473 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-987473 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.101880375s)
--- PASS: TestMountStart/serial/StartWithMountSecond (29.10s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-971308 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-987473
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-987473: (1.267337189s)
--- PASS: TestMountStart/serial/Stop (1.27s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (24.04s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-987473
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-987473: (23.043355933s)
--- PASS: TestMountStart/serial/RestartStopped (24.04s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-987473 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (112.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-900777 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 21:44:11.809440   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-900777 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m52.572367474s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (112.97s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-900777 -- rollout status deployment/busybox: (3.662529781s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-bktzd -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-ggs4l -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-bktzd -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-ggs4l -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-bktzd -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-ggs4l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.12s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-bktzd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-bktzd -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-ggs4l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-900777 -- exec busybox-7dff88458-ggs4l -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)
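
For reference, the busybox pipeline above (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the third space-separated field of nslookup's fifth output line, which is the resolved host IP that the subsequent ping targets. A small sketch of that parsing; the nslookup transcript below is hypothetical and used only to exercise it.

// Illustration only: mirror `awk 'NR==5' | cut -d' ' -f3` in Go.
package main

import (
	"fmt"
	"strings"
)

// line5Field3 returns the third single-space-separated field of line 5.
func line5Field3(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Hypothetical busybox nslookup output; the real test captures this in-pod.
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress 1: 192.168.39.1 host.minikube.internal\n"
	fmt.Println(line5Field3(sample)) // 192.168.39.1
}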

                                                
                                    
x
+
TestMultiNode/serial/AddNode (50.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-900777 -v 3 --alsologtostderr
E0920 21:45:09.131914   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-900777 -v 3 --alsologtostderr: (49.444005533s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.02s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-900777 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp testdata/cp-test.txt multinode-900777:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777:/home/docker/cp-test.txt multinode-900777-m02:/home/docker/cp-test_multinode-900777_multinode-900777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test_multinode-900777_multinode-900777-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777:/home/docker/cp-test.txt multinode-900777-m03:/home/docker/cp-test_multinode-900777_multinode-900777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test_multinode-900777_multinode-900777-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp testdata/cp-test.txt multinode-900777-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt multinode-900777:/home/docker/cp-test_multinode-900777-m02_multinode-900777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test_multinode-900777-m02_multinode-900777.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m02:/home/docker/cp-test.txt multinode-900777-m03:/home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test_multinode-900777-m02_multinode-900777-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp testdata/cp-test.txt multinode-900777-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile8418019/001/cp-test_multinode-900777-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt multinode-900777:/home/docker/cp-test_multinode-900777-m03_multinode-900777.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777 "sudo cat /home/docker/cp-test_multinode-900777-m03_multinode-900777.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 cp multinode-900777-m03:/home/docker/cp-test.txt multinode-900777-m02:/home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 ssh -n multinode-900777-m02 "sudo cat /home/docker/cp-test_multinode-900777-m03_multinode-900777-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 node stop m03: (1.450921095s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-900777 status: exit status 7 (420.269392ms)

                                                
                                                
-- stdout --
	multinode-900777
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-900777-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-900777-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr: exit status 7 (419.647862ms)

                                                
                                                
-- stdout --
	multinode-900777
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-900777-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-900777-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 21:45:39.285336   46729 out.go:345] Setting OutFile to fd 1 ...
	I0920 21:45:39.285588   46729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:45:39.285599   46729 out.go:358] Setting ErrFile to fd 2...
	I0920 21:45:39.285605   46729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 21:45:39.285824   46729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 21:45:39.286002   46729 out.go:352] Setting JSON to false
	I0920 21:45:39.286038   46729 mustload.go:65] Loading cluster: multinode-900777
	I0920 21:45:39.286135   46729 notify.go:220] Checking for updates...
	I0920 21:45:39.286558   46729 config.go:182] Loaded profile config "multinode-900777": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 21:45:39.286580   46729 status.go:174] checking status of multinode-900777 ...
	I0920 21:45:39.287065   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.287114   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.305525   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38767
	I0920 21:45:39.305941   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.306611   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.306639   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.307047   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.307247   46729 main.go:141] libmachine: (multinode-900777) Calling .GetState
	I0920 21:45:39.308741   46729 status.go:364] multinode-900777 host status = "Running" (err=<nil>)
	I0920 21:45:39.308757   46729 host.go:66] Checking if "multinode-900777" exists ...
	I0920 21:45:39.309151   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.309192   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.324376   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I0920 21:45:39.324719   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.325154   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.325172   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.325460   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.325617   46729 main.go:141] libmachine: (multinode-900777) Calling .GetIP
	I0920 21:45:39.328532   46729 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:45:39.328905   46729 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:45:39.328955   46729 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:45:39.329081   46729 host.go:66] Checking if "multinode-900777" exists ...
	I0920 21:45:39.329465   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.329513   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.344894   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I0920 21:45:39.345333   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.345871   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.345897   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.346301   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.346508   46729 main.go:141] libmachine: (multinode-900777) Calling .DriverName
	I0920 21:45:39.346699   46729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:45:39.346735   46729 main.go:141] libmachine: (multinode-900777) Calling .GetSSHHostname
	I0920 21:45:39.349364   46729 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:45:39.349719   46729 main.go:141] libmachine: (multinode-900777) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:17:18", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:42:55 +0000 UTC Type:0 Mac:52:54:00:15:17:18 Iaid: IPaddr:192.168.39.150 Prefix:24 Hostname:multinode-900777 Clientid:01:52:54:00:15:17:18}
	I0920 21:45:39.349749   46729 main.go:141] libmachine: (multinode-900777) DBG | domain multinode-900777 has defined IP address 192.168.39.150 and MAC address 52:54:00:15:17:18 in network mk-multinode-900777
	I0920 21:45:39.349850   46729 main.go:141] libmachine: (multinode-900777) Calling .GetSSHPort
	I0920 21:45:39.350027   46729 main.go:141] libmachine: (multinode-900777) Calling .GetSSHKeyPath
	I0920 21:45:39.350172   46729 main.go:141] libmachine: (multinode-900777) Calling .GetSSHUsername
	I0920 21:45:39.350327   46729 sshutil.go:53] new ssh client: &{IP:192.168.39.150 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777/id_rsa Username:docker}
	I0920 21:45:39.434730   46729 ssh_runner.go:195] Run: systemctl --version
	I0920 21:45:39.441531   46729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:45:39.456100   46729 kubeconfig.go:125] found "multinode-900777" server: "https://192.168.39.150:8443"
	I0920 21:45:39.456132   46729 api_server.go:166] Checking apiserver status ...
	I0920 21:45:39.456163   46729 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 21:45:39.469801   46729 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup
	W0920 21:45:39.479203   46729 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1068/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 21:45:39.479277   46729 ssh_runner.go:195] Run: ls
	I0920 21:45:39.483716   46729 api_server.go:253] Checking apiserver healthz at https://192.168.39.150:8443/healthz ...
	I0920 21:45:39.487608   46729 api_server.go:279] https://192.168.39.150:8443/healthz returned 200:
	ok
	I0920 21:45:39.487629   46729 status.go:456] multinode-900777 apiserver status = Running (err=<nil>)
	I0920 21:45:39.487637   46729 status.go:176] multinode-900777 status: &{Name:multinode-900777 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:45:39.487653   46729 status.go:174] checking status of multinode-900777-m02 ...
	I0920 21:45:39.487930   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.487964   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.503001   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37275
	I0920 21:45:39.503435   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.503889   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.503910   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.504247   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.504411   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetState
	I0920 21:45:39.506068   46729 status.go:364] multinode-900777-m02 host status = "Running" (err=<nil>)
	I0920 21:45:39.506085   46729 host.go:66] Checking if "multinode-900777-m02" exists ...
	I0920 21:45:39.506379   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.506413   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.522120   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0920 21:45:39.522577   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.523111   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.523134   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.523422   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.523588   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetIP
	I0920 21:45:39.526420   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | domain multinode-900777-m02 has defined MAC address 52:54:00:46:03:20 in network mk-multinode-900777
	I0920 21:45:39.526812   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:03:20", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:43:57 +0000 UTC Type:0 Mac:52:54:00:46:03:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-900777-m02 Clientid:01:52:54:00:46:03:20}
	I0920 21:45:39.526841   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | domain multinode-900777-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:46:03:20 in network mk-multinode-900777
	I0920 21:45:39.526977   46729 host.go:66] Checking if "multinode-900777-m02" exists ...
	I0920 21:45:39.527312   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.527353   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.543045   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36231
	I0920 21:45:39.543445   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.543911   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.543933   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.544256   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.544464   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .DriverName
	I0920 21:45:39.544620   46729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 21:45:39.544639   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetSSHHostname
	I0920 21:45:39.547244   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | domain multinode-900777-m02 has defined MAC address 52:54:00:46:03:20 in network mk-multinode-900777
	I0920 21:45:39.547632   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:46:03:20", ip: ""} in network mk-multinode-900777: {Iface:virbr1 ExpiryTime:2024-09-20 22:43:57 +0000 UTC Type:0 Mac:52:54:00:46:03:20 Iaid: IPaddr:192.168.39.27 Prefix:24 Hostname:multinode-900777-m02 Clientid:01:52:54:00:46:03:20}
	I0920 21:45:39.547676   46729 main.go:141] libmachine: (multinode-900777-m02) DBG | domain multinode-900777-m02 has defined IP address 192.168.39.27 and MAC address 52:54:00:46:03:20 in network mk-multinode-900777
	I0920 21:45:39.547762   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetSSHPort
	I0920 21:45:39.547927   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetSSHKeyPath
	I0920 21:45:39.548067   46729 main.go:141] libmachine: (multinode-900777-m02) Calling .GetSSHUsername
	I0920 21:45:39.548182   46729 sshutil.go:53] new ssh client: &{IP:192.168.39.27 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19672-9614/.minikube/machines/multinode-900777-m02/id_rsa Username:docker}
	I0920 21:45:39.629988   46729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 21:45:39.644940   46729 status.go:176] multinode-900777-m02 status: &{Name:multinode-900777-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 21:45:39.644982   46729 status.go:174] checking status of multinode-900777-m03 ...
	I0920 21:45:39.645344   46729 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0920 21:45:39.645384   46729 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0920 21:45:39.660615   46729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41339
	I0920 21:45:39.661064   46729 main.go:141] libmachine: () Calling .GetVersion
	I0920 21:45:39.661530   46729 main.go:141] libmachine: Using API Version  1
	I0920 21:45:39.661545   46729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0920 21:45:39.661820   46729 main.go:141] libmachine: () Calling .GetMachineName
	I0920 21:45:39.662008   46729 main.go:141] libmachine: (multinode-900777-m03) Calling .GetState
	I0920 21:45:39.663607   46729 status.go:364] multinode-900777-m03 host status = "Stopped" (err=<nil>)
	I0920 21:45:39.663618   46729 status.go:377] host is not running, skipping remaining checks
	I0920 21:45:39.663623   46729 status.go:176] multinode-900777-m03 status: &{Name:multinode-900777-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
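
The stderr trace above shows how the status command decides the apiserver is Running: after the freezer-cgroup lookup fails, it falls back to probing https://192.168.39.150:8443/healthz and treats a 200/"ok" response as healthy. Below is a minimal sketch of that probe, reusing the endpoint from the log and skipping TLS verification purely to keep the example self-contained (the real check uses the cluster's credentials).

// Illustration only: probe the apiserver healthz endpoint seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: no client certs, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.150:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}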

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 node start m03 -v=7 --alsologtostderr: (38.523818523s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.15s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-900777 node delete m03: (1.648046761s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.18s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (178.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-900777 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0920 21:54:52.208107   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
E0920 21:55:09.137997   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-900777 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m58.341744194s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-900777 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.87s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (44.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-900777
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-900777-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-900777-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (63.577011ms)

                                                
                                                
-- stdout --
	* [multinode-900777-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-900777-m02' is duplicated with machine name 'multinode-900777-m02' in profile 'multinode-900777'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-900777-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-900777-m03 --driver=kvm2  --container-runtime=crio: (43.715697023s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-900777
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-900777: exit status 80 (214.041417ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-900777 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-900777-m03 already exists in multinode-900777-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-900777-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.85s)
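Note: the two expected failures above come from minikube's profile/node name checks; a minimal sketch reproducing them with the same profile names as this run:
    # exit 14 (MK_USAGE): a new profile may not reuse a machine name owned by an existing profile
    out/minikube-linux-amd64 start -p multinode-900777-m02 --driver=kvm2 --container-runtime=crio
    # exit 80 (GUEST_NODE_ADD): "node add" refuses a node whose generated name already belongs to another profile
    out/minikube-linux-amd64 start -p multinode-900777-m03 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 node add -p multinode-900777
    out/minikube-linux-amd64 delete -p multinode-900777-m03   # cleanup, as the test does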

                                                
                                    
x
+
TestScheduledStopUnix (113.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-459426 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-459426 --memory=2048 --driver=kvm2  --container-runtime=crio: (42.035300936s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459426 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-459426 -n scheduled-stop-459426
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459426 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 22:03:18.261279   16785 retry.go:31] will retry after 51.56µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.262445   16785 retry.go:31] will retry after 117.265µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.263613   16785 retry.go:31] will retry after 261.859µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.264759   16785 retry.go:31] will retry after 256.406µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.265913   16785 retry.go:31] will retry after 553.773µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.267029   16785 retry.go:31] will retry after 933.866µs: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.268177   16785 retry.go:31] will retry after 1.654914ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.270450   16785 retry.go:31] will retry after 2.103958ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.273682   16785 retry.go:31] will retry after 1.974854ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.275951   16785 retry.go:31] will retry after 2.493631ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.279176   16785 retry.go:31] will retry after 4.959292ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.284400   16785 retry.go:31] will retry after 8.6387ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.293648   16785 retry.go:31] will retry after 7.407112ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.301921   16785 retry.go:31] will retry after 24.606276ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
I0920 22:03:18.327201   16785 retry.go:31] will retry after 19.152638ms: open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/scheduled-stop-459426/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459426 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459426 -n scheduled-stop-459426
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459426
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-459426 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 22:04:11.810253   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-459426
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-459426: exit status 7 (64.41284ms)

                                                
                                                
-- stdout --
	scheduled-stop-459426
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459426 -n scheduled-stop-459426
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459426 -n scheduled-stop-459426: exit status 7 (63.924005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-459426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-459426
--- PASS: TestScheduledStopUnix (113.61s)
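Note: the run above exercises minikube's scheduled-stop flags; a minimal sketch of the same flow with the profile used by this run:
    out/minikube-linux-amd64 stop -p scheduled-stop-459426 --schedule 5m          # arm a stop five minutes out
    out/minikube-linux-amd64 stop -p scheduled-stop-459426 --cancel-scheduled     # disarm the pending stop
    out/minikube-linux-amd64 stop -p scheduled-stop-459426 --schedule 15s         # arm a short timer and let it fire
    out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-459426   # prints "Stopped" and exits 7 once the timer has fired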

                                                
                                    
x
+
TestRunningBinaryUpgrade (174.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.672206141 start -p running-upgrade-850418 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.672206141 start -p running-upgrade-850418 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m18.403321202s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-850418 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-850418 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m34.905097312s)
helpers_test.go:175: Cleaning up "running-upgrade-850418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-850418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-850418: (1.135213477s)
--- PASS: TestRunningBinaryUpgrade (174.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (81.147435ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-614859] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
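Note: --no-kubernetes and --kubernetes-version are mutually exclusive, which is what the MK_USAGE exit above verifies; a minimal sketch of the accepted form, following the hint printed in the error:
    out/minikube-linux-amd64 config unset kubernetes-version   # clear any globally pinned version first
    out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --driver=kvm2 --container-runtime=crio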

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (126.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614859 --driver=kvm2  --container-runtime=crio
E0920 22:05:09.132328   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614859 --driver=kvm2  --container-runtime=crio: (2m5.998410372s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-614859 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (126.25s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.11s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.11s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (147.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1714947776 start -p stopped-upgrade-541514 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1714947776 start -p stopped-upgrade-541514 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m38.156551025s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1714947776 -p stopped-upgrade-541514 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1714947776 -p stopped-upgrade-541514 stop: (2.131176474s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-541514 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-541514 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (47.184641996s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (147.47s)
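Note: the stopped-binary upgrade boils down to creating the cluster with the old release, stopping it, then re-running start with the new binary against the same profile; a minimal sketch (the /tmp binary is the temporary v1.26.0 copy the test downloads):
    /tmp/minikube-v1.26.0.1714947776 start -p stopped-upgrade-541514 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
    /tmp/minikube-v1.26.0.1714947776 -p stopped-upgrade-541514 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-541514 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 logs -p stopped-upgrade-541514   # inspected by the MinikubeLogs sub-test below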

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.179233817s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-614859 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-614859 status -o json: exit status 2 (250.587976ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-614859","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-614859
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-614859: (1.278690033s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (46.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614859 --no-kubernetes --driver=kvm2  --container-runtime=crio: (46.184070789s)
--- PASS: TestNoKubernetes/serial/Start (46.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-541514
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
x
+
TestPause/serial/Start (88.56s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-576835 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-576835 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m28.561458291s)
--- PASS: TestPause/serial/Start (88.56s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-614859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-614859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.007927ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
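Note: the check above treats a non-zero exit from systemctl is-active as "kubelet not running"; a minimal sketch of the same probe, with echo $? added here only to make the exit status visible:
    out/minikube-linux-amd64 ssh -p NoKubernetes-614859 "sudo systemctl is-active --quiet service kubelet"
    echo $?   # non-zero (status 3 above, i.e. the unit is inactive) while Kubernetes is disabled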

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.69s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.54s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-614859
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-614859: (1.54384351s)
--- PASS: TestNoKubernetes/serial/Stop (1.54s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (43.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-614859 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-614859 --driver=kvm2  --container-runtime=crio: (43.98713373s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (43.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-614859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-614859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.149461ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (48.42s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-576835 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-576835 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.389085251s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (48.42s)

                                                
                                    
x
+
TestPause/serial/Pause (0.89s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-576835 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.89s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-576835 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-576835 --output=json --layout=cluster: exit status 2 (286.228272ms)

                                                
                                                
-- stdout --
	{"Name":"pause-576835","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-576835","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
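Note: status --output=json --layout=cluster reports component states as HTTP-style codes (200 OK, 405 Stopped, 418 Paused), and the command itself exits 2 while the cluster is paused; a minimal sketch of pulling the state names out of that JSON, assuming the jq tool is available on the host:
    out/minikube-linux-amd64 status -p pause-576835 --output=json --layout=cluster | jq '.StatusName, .Nodes[0].Components.kubelet.StatusName'
    # prints "Paused" and "Stopped" for the run above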

                                                
                                    
x
+
TestPause/serial/Unpause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-576835 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.72s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.08s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-576835 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-576835 --alsologtostderr -v=5: (1.078990719s)
--- PASS: TestPause/serial/PauseAgain (1.08s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.28s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-576835 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-576835 --alsologtostderr -v=5: (1.282794446s)
--- PASS: TestPause/serial/DeletePaused (1.28s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.65s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-833441 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-833441 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (115.881688ms)

                                                
                                                
-- stdout --
	* [false-833441] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19672
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 22:10:23.814600   59583 out.go:345] Setting OutFile to fd 1 ...
	I0920 22:10:23.814718   59583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:23.814728   59583 out.go:358] Setting ErrFile to fd 2...
	I0920 22:10:23.814732   59583 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 22:10:23.814924   59583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19672-9614/.minikube/bin
	I0920 22:10:23.815549   59583 out.go:352] Setting JSON to false
	I0920 22:10:23.816556   59583 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6772,"bootTime":1726863452,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 22:10:23.816655   59583 start.go:139] virtualization: kvm guest
	I0920 22:10:23.819424   59583 out.go:177] * [false-833441] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 22:10:23.821115   59583 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 22:10:23.821169   59583 notify.go:220] Checking for updates...
	I0920 22:10:23.823933   59583 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 22:10:23.825398   59583 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19672-9614/kubeconfig
	I0920 22:10:23.826708   59583 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19672-9614/.minikube
	I0920 22:10:23.827988   59583 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 22:10:23.829347   59583 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 22:10:23.831370   59583 config.go:182] Loaded profile config "cert-expiration-673228": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:23.831521   59583 config.go:182] Loaded profile config "cert-options-335786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:23.831674   59583 config.go:182] Loaded profile config "kubernetes-upgrade-696728": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0920 22:10:23.831794   59583 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 22:10:23.871237   59583 out.go:177] * Using the kvm2 driver based on user configuration
	I0920 22:10:23.872955   59583 start.go:297] selected driver: kvm2
	I0920 22:10:23.872974   59583 start.go:901] validating driver "kvm2" against <nil>
	I0920 22:10:23.872986   59583 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 22:10:23.875237   59583 out.go:201] 
	W0920 22:10:23.876750   59583 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0920 22:10:23.878165   59583 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-833441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.117:8443
  name: cert-expiration-673228
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:10:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.192:8443
  name: kubernetes-upgrade-696728
contexts:
- context:
    cluster: cert-expiration-673228
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-673228
  name: cert-expiration-673228
- context:
    cluster: kubernetes-upgrade-696728
    user: kubernetes-upgrade-696728
  name: kubernetes-upgrade-696728
current-context: kubernetes-upgrade-696728
kind: Config
preferences: {}
users:
- name: cert-expiration-673228
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.key
- name: kubernetes-upgrade-696728
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-833441

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-833441"

                                                
                                                
----------------------- debugLogs end: false-833441 [took: 3.028261209s] --------------------------------
helpers_test.go:175: Cleaning up "false-833441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-833441
--- PASS: TestNetworkPlugins/group/false (3.31s)
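Note: the MK_USAGE exit above is the expected outcome, since the crio runtime refuses to start without a CNI; a minimal sketch of the rejected versus an accepted invocation (--cni=bridge is shown only as one example of an explicit CNI choice):
    out/minikube-linux-amd64 start -p false-833441 --memory=2048 --cni=false  --driver=kvm2 --container-runtime=crio   # rejected: crio requires CNI
    out/minikube-linux-amd64 start -p false-833441 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio   # accepted: an explicit CNI is supplied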

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (78.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-032884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-032884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m18.542407662s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.54s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (103.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-562750 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 22:11:32.209668   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-562750 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m43.445496964s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (103.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (10.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-032884 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [01bf6766-b772-49bf-b56e-bea5b7039087] Pending
helpers_test.go:344: "busybox" [01bf6766-b772-49bf-b56e-bea5b7039087] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [01bf6766-b772-49bf-b56e-bea5b7039087] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004141918s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-032884 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.31s)
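Note: the deploy step above polls for the busybox pod by label before exec'ing into it; a minimal sketch of an equivalent manual flow, using kubectl wait in place of the test's own polling helper:
    kubectl --context no-preload-032884 create -f testdata/busybox.yaml
    kubectl --context no-preload-032884 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m0s
    kubectl --context no-preload-032884 exec busybox -- /bin/sh -c "ulimit -n"   # reports the pod's open-file limit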

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-032884 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-032884 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-562750 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [246386c0-503a-4f46-8c53-5ce47131b001] Pending
helpers_test.go:344: "busybox" [246386c0-503a-4f46-8c53-5ce47131b001] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [246386c0-503a-4f46-8c53-5ce47131b001] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.005674861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-562750 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (55.969080962s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-562750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-562750 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b7ab952b-db21-406d-b6a2-eedcd2ddb833] Pending
helpers_test.go:344: "busybox" [b7ab952b-db21-406d-b6a2-eedcd2ddb833] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b7ab952b-db21-406d-b6a2-eedcd2ddb833] Running
E0920 22:14:11.809529   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004268757s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-799783 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-799783 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.95s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (681.6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-032884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 22:15:09.132744   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-032884 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m21.347526266s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-032884 -n no-preload-032884
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (681.60s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (530.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-562750 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-562750 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m50.058153007s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-562750 -n embed-certs-562750
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (530.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (526.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-799783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-799783 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m45.773001839s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-799783 -n default-k8s-diff-port-799783
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (526.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-593412 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-593412 --alsologtostderr -v=3: (6.302996618s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-593412 -n old-k8s-version-593412: exit status 7 (62.779594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-593412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
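Exit status 7 from minikube status encodes a stopped host rather than a failure, which is why the harness notes "may be ok" and goes on to enable the dashboard addon against the stopped profile. Reproducing the same tolerance by hand:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-593412 || true   # prints Stopped; exit status 7 is expected after a stop
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-593412 --images=MetricsScraper=registry.k8s.io/echoserver:1.4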

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (49.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-402676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-402676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (49.476270824s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (49.48s)
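The newest-cni profile starts with --network-plugin=cni but no CNI actually deployed, so --wait is narrowed to apiserver, system_pods and default_sa; ordinary workload pods cannot schedule yet, which is what the later "cni mode requires additional setup" warnings refer to. A sketch of the same start plus a quick look at what is running afterwards; the comment about Pending pods is an assumption, not something this log records:

	out/minikube-linux-amd64 start -p newest-cni-402676 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1
	kubectl --context newest-cni-402676 -n kube-system get pods   # pods needing a pod network may stay Pending until a CNI is applied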

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m23.375741946s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (65.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m5.109129336s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (65.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-402676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-402676 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.138114202s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-402676 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-402676 --alsologtostderr -v=3: (10.611756475s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.61s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402676 -n newest-cni-402676
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402676 -n newest-cni-402676: exit status 7 (85.463527ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-402676 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (50.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-402676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E0920 22:42:14.882379   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/functional-523822/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.362923   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.369469   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.380898   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.402353   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.443784   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.526067   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:15.687665   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:16.009183   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:16.650824   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:17.932207   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:42:20.493663   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-402676 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (50.614114274s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-402676 -n newest-cni-402676
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (50.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-833441 "pgrep -a kubelet"
I0920 22:42:23.523185   16785 config.go:182] Loaded profile config "auto-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-833441 replace --force -f testdata/netcat-deployment.yaml
I0920 22:42:23.801551   16785 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qfcfv" [ea5bd37d-7c73-4a5a-97af-8c618dec94a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:42:25.615063   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-qfcfv" [ea5bd37d-7c73-4a5a-97af-8c618dec94a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.00504484s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-402676 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-402676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402676 -n newest-cni-402676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402676 -n newest-cni-402676: exit status 2 (252.724677ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402676 -n newest-cni-402676
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402676 -n newest-cni-402676: exit status 2 (247.465923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-402676 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-402676 -n newest-cni-402676
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-402676 -n newest-cni-402676
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.49s)
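The Pause step freezes the control plane, checks that the API server reports Paused and the kubelet reports Stopped (exit status 2 from minikube status is the expected signal here, not an error), then unpauses and re-checks. Condensed into a by-hand sequence using the same commands as the run above:

	out/minikube-linux-amd64 pause -p newest-cni-402676
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-402676 || true   # Paused; exit status 2 expected
	out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-402676 || true     # Stopped; exit status 2 expected
	out/minikube-linux-amd64 unpause -p newest-cni-402676
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-402676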

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ts96r" [7dd49b20-d9e3-4298-a854-1a52be0c21c8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004429595s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (75.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m15.402677041s)
--- PASS: TestNetworkPlugins/group/calico/Start (75.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-833441 "pgrep -a kubelet"
I0920 22:42:35.809669   16785 config.go:182] Loaded profile config "kindnet-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-833441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cvrcc" [43c13034-4e34-4b13-9fd9-8a575bee9590] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cvrcc" [43c13034-4e34-4b13-9fd9-8a575bee9590] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005751847s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-833441 exec deployment/netcat -- nslookup kubernetes.default
E0920 22:42:35.856934   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I0920 22:42:36.071595   16785 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
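Each network plugin gets the same three probes, all run from inside the netcat deployment: nslookup against kubernetes.default exercises cluster DNS, nc to localhost:8080 checks the pod can reach its own listener, and nc to the netcat service name exercises the hairpin path (traffic leaving the pod via the service and looping back to the same pod). For the auto profile the trio boils down to:

	kubectl --context auto-833441 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"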

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-833441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (81.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0920 22:42:56.338480   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m21.59943921s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (81.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (118.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m58.741121789s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (118.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (116.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0920 22:43:37.300151   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/no-preload-032884/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m56.182283394s)
--- PASS: TestNetworkPlugins/group/flannel/Start (116.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9twtg" [bebc9ca4-c1f7-4f5c-a615-6eca5d98dcd9] Running
E0920 22:43:51.354523   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.360920   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.372317   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.393779   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.435279   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.516743   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:51.678675   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:52.000579   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:43:52.642299   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005463759s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-833441 "pgrep -a kubelet"
I0920 22:43:53.888396   16785 config.go:182] Loaded profile config "calico-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-833441 replace --force -f testdata/netcat-deployment.yaml
E0920 22:43:53.923920   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bs9vm" [74186622-ecf8-4231-8b35-9c58c90c79fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:43:56.485618   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-bs9vm" [74186622-ecf8-4231-8b35-9c58c90c79fb] Running
E0920 22:44:01.607356   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.603327   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.609781   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.621277   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.642733   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.684218   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.765717   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:03.927477   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:04.249568   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
E0920 22:44:04.891338   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005277518s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-833441 exec deployment/netcat -- nslookup kubernetes.default
E0920 22:44:06.173060   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-833441 "pgrep -a kubelet"
I0920 22:44:13.452621   16785 config.go:182] Loaded profile config "custom-flannel-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-833441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h7qqn" [7c4ecb00-c2f3-4c4d-8bd5-82939ae76fd3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 22:44:13.856607   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-h7qqn" [7c4ecb00-c2f3-4c4d-8bd5-82939ae76fd3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004542939s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (55.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0920 22:44:24.099000   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/default-k8s-diff-port-799783/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-833441 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (55.357013223s)
--- PASS: TestNetworkPlugins/group/bridge/Start (55.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-833441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-833441 "pgrep -a kubelet"
I0920 22:45:01.287783   16785 config.go:182] Loaded profile config "enable-default-cni-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-833441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k2ppd" [56e0c19b-6ed2-4f25-90dc-4e15cafafe8a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k2ppd" [56e0c19b-6ed2-4f25-90dc-4e15cafafe8a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.006778463s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rs4sf" [bbb858f2-ebf4-4116-95a4-9acbe8c93d7a] Running
E0920 22:45:09.132680   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/addons-628754/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005540501s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
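Before the connectivity probes run, the ControllerPod check simply waits for the flannel daemonset pod (label app=flannel in the kube-flannel namespace) to report healthy. A manual stand-in for that wait:

	kubectl --context flannel-833441 -n kube-flannel get pods -l app=flannel
	kubectl --context flannel-833441 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m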

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-833441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-833441 "pgrep -a kubelet"
I0920 22:45:13.075000   16785 config.go:182] Loaded profile config "flannel-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-833441 replace --force -f testdata/netcat-deployment.yaml
E0920 22:45:13.292731   16785 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/old-k8s-version-593412/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8jqg2" [bd55c04c-9e09-44ff-9dd8-d46f47fe695d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8jqg2" [bd55c04c-9e09-44ff-9dd8-d46f47fe695d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003417731s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-833441 "pgrep -a kubelet"
I0920 22:45:19.419226   16785 config.go:182] Loaded profile config "bridge-833441": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-833441 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ttjvc" [59d6b5bc-3aae-4a25-9056-8707219499a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ttjvc" [59d6b5bc-3aae-4a25-9056-8707219499a5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003810371s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-833441 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (21.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-833441 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-833441 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.165888538s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0920 22:45:45.843814   16785 retry.go:31] will retry after 1.131090461s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-833441 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-833441 exec deployment/netcat -- nslookup kubernetes.default: (5.139035626s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.44s)
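The first nslookup against the bridge profile timed out ("no servers could be reached"); the harness retried about a second later and the second attempt succeeded, which is why this sub-test took roughly 21 s overall. When poking at a cluster by hand, a plain retry loop (not the harness's retry helper) does the same job:

	for i in 1 2 3; do
	  kubectl --context bridge-833441 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 2
	done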

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-833441 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
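The Localhost and HairPin checks above only test TCP reachability: nc -w 5 -i 5 -z <target> 8080 succeeds if a connection can be opened within the timeout (hairpinning means the pod can reach itself through its own service name). A rough stand-in for that nc -z probe, assuming it is run from a vantage point that can resolve the netcat service (for example from inside the cluster), could be:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // dialCheck mimics `nc -w 5 -z host port`: it only verifies that a TCP
    // connection can be opened within the timeout, then closes it immediately.
    func dialCheck(host, port string) error {
    	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, port), 5*time.Second)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	// "netcat" stands in for the service name targeted by the hairpin test;
    	// swap in "localhost" to approximate the Localhost check.
    	if err := dialCheck("netcat", "8080"); err != nil {
    		fmt.Println("connection failed:", err)
    		return
    	}
    	fmt.Println("port reachable")
    }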

                                                
                                    

Test skip (37/317)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0
37 TestAddons/parallel/Olm 0
47 TestDockerFlags 0
50 TestDockerEnvContainerd 0
52 TestHyperKitDriverInstallOrUpdate 0
53 TestHyperkitDriverSkipUpgrade 0
104 TestFunctional/parallel/DockerEnv 0
105 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
153 TestGvisorAddon 0
175 TestImageBuild 0
202 TestKicCustomNetwork 0
203 TestKicExistingNetwork 0
204 TestKicCustomSubnet 0
205 TestKicStaticIP 0
237 TestChangeNoneUser 0
240 TestScheduledStopWindows 0
242 TestSkaffold 0
244 TestInsufficientStorage 0
248 TestMissingContainerUpgrade 0
262 TestStartStop/group/disable-driver-mounts 0.15
280 TestNetworkPlugins/group/kubenet 3.41
288 TestNetworkPlugins/group/cilium 3.21
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-440582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-440582
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-833441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.117:8443
  name: cert-expiration-673228
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:10:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.192:8443
  name: kubernetes-upgrade-696728
contexts:
- context:
    cluster: cert-expiration-673228
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-673228
  name: cert-expiration-673228
- context:
    cluster: kubernetes-upgrade-696728
    user: kubernetes-upgrade-696728
  name: kubernetes-upgrade-696728
current-context: kubernetes-upgrade-696728
kind: Config
preferences: {}
users:
- name: cert-expiration-673228
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.key
- name: kubernetes-upgrade-696728
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-833441

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-833441"

                                                
                                                
----------------------- debugLogs end: kubenet-833441 [took: 3.238139863s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-833441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-833441
--- SKIP: TestNetworkPlugins/group/kubenet (3.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-833441 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-833441

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-833441" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.39.117:8443
  name: cert-expiration-673228
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19672-9614/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:10:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.50.192:8443
  name: kubernetes-upgrade-696728
contexts:
- context:
    cluster: cert-expiration-673228
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:09:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-673228
  name: cert-expiration-673228
- context:
    cluster: kubernetes-upgrade-696728
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 22:10:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-696728
  name: kubernetes-upgrade-696728
current-context: kubernetes-upgrade-696728
kind: Config
preferences: {}
users:
- name: cert-expiration-673228
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/cert-expiration-673228/client.key
- name: kubernetes-upgrade-696728
  user:
    client-certificate: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.crt
    client-key: /home/jenkins/minikube-integration/19672-9614/.minikube/profiles/kubernetes-upgrade-696728/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-833441

>>> host: docker daemon status:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: docker daemon config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: docker system info:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: cri-docker daemon status:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: cri-docker daemon config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: cri-dockerd version:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: containerd daemon status:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: containerd daemon config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: containerd config dump:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: crio daemon status:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: crio daemon config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: /etc/crio:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

>>> host: crio config:
* Profile "cilium-833441" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-833441"

----------------------- debugLogs end: cilium-833441 [took: 3.065799176s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-833441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-833441
--- SKIP: TestNetworkPlugins/group/cilium (3.21s)
